From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: by sourceware.org (Postfix, from userid 48)
	id 247183857709; Fri, 4 Aug 2023 10:29:02 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 247183857709
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org;
	s=default; t=1691144942;
	bh=copo5zNqR5q4AudIQ6E1pwtwNPIKHcpbDJ4BdC61lmA=;
	h=From:To:Subject:Date:In-Reply-To:References:From;
	b=jXwUQqo3B4qav9cSJvP2OP6kosrm8S/ycaNa+iEb0svOssS4kmJjjb3gH7F+8RLY8
	 FnnzIvyLU6ChdK9t/VUEDUtevJKMgOfjJ2w3h6rEdTuOvFUo96pZvr+Fu9w2BkDo7Q
	 rv9KJsp+qlFPcoTlnLP3wmvGZ0S8NOY2Q/GXj5Mg=
From: "cvs-commit at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/110316] [11/12/13/14 Regression] g++.dg/ext/timevar1.C and timevar2.C fail erratically
Date: Fri, 04 Aug 2023 10:29:00 +0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: middle-end
X-Bugzilla-Version: 14.0
X-Bugzilla-Keywords: ice-on-valid-code
X-Bugzilla-Severity: normal
X-Bugzilla-Who: cvs-commit at gcc dot gnu.org
X-Bugzilla-Status: NEW
X-Bugzilla-Resolution:
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 11.5
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields:
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
List-Id:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110316

--- Comment #4 from CVS Commits ---
The master branch has been updated by Matthew Malcomson:

https://gcc.gnu.org/g:0782b01c9ea43d43648071faa9c65a101f5068a2

commit r14-2986-g0782b01c9ea43d43648071faa9c65a101f5068a2
Author: Matthew Malcomson
Date:   Fri Aug 4 11:26:47 2023 +0100

mid-end: Use integral time intervals in timevar.cc

On some AArch64 bootstrapped builds, we were getting a flaky test because the floating point operations in
`get_time` were being fused with the floating point operations in `timevar_accumulate`. This meant that the rounding behaviour of our multiplication with `ticks_to_msec` was different when used in `timer::start` and when performed in `timer::stop`. These extra inaccuracies led to the testcase `g++.dg/ext/timevar1.C` being flaky on some hardware.

------------------------------

Avoiding the inlining was agreed to be undesirable, which left three alternative approaches:
1) Use `-ffp-contract=on` to avoid this particular optimisation.
2) Adjust the code so that the "tolerance" is always of the order of a "tick".
3) Record times and elapsed differences in integral values.
   - Could be in terms of a standard measurement (e.g. nanoseconds or microseconds).
   - Could be in terms of whatever integral value ("ticks" / seconds / µseconds / "clock ticks") is returned from the syscall chosen at configure time.

While `-ffp-contract=on` removes the problem that I bumped into, there has been a similar bug on x86 that was due to a different floating point problem, one that also happens after `get_time` and `timevar_accumulate` are both inlined into the same function. Hence it seems worth choosing a different approach.

Of the two other solutions, recording measurements in integral values seems the most robust against slightly "off" measurements being presented to the user -- even though the other option could also avoid the ICE that creates the flaky test.

I considered storing time in whatever units our syscall returns and normalising them at the time we print out, rather than normalising them to nanoseconds at the point we record our "current time". The logic being that normalisation could have some rounding effect (e.g. if TICKS_PER_SECOND is 3) that would be taken into account in calculations. I decided against it in order to give the values recorded in `timevar_time_def` some interpretive value, so it's easier to read the code.
Compared to the small rounding, which would represent a tiny amount of time and AIUI cannot trigger the same kind of ICEs as we are attempting to fix, said interpretive value seems more valuable.

Recording time in microseconds seemed reasonable, since all obvious values for ticks and `getrusage` are at microsecond granularity or less precise. That said, since TICKS_PER_SECOND and CLOCKS_PER_SEC are both values given to us by the host system, I was not sure enough of that to make this decision.

------------------------------

timer::all_zero is ignoring rows which are inconsequential to the user and would be printed out as all zeros. Since upon printing rows we convert to the same double value and print out the same precision as before, we return true/false based on the same amount of time as before.

timer::print_row casts to a floating point measurement in units of seconds, as was printed out before.

timer::validate_phases -- I'm printing out nanoseconds here rather than floating point seconds. Since this is an error message for when things have "gone wrong", printing out the actual nanoseconds that have been recorded seems like the best approach.

N.b. since we now print out nanoseconds instead of a floating point value, the padding requirements are different. Originally we were padding to 24 characters and printing 18 decimal places. This looked odd with the now visually smaller values getting printed. I judged 13 characters (corresponding to 2 hours) to be a reasonable point at which our alignment could start to degrade, and this provides a more compact output for the majority of cases (checked by triggering the error case via GDB).

------------------------------

N.b. I use a literal 1000000000 for "NANOSEC_PER_SEC". I believe this would fit in an integer on all hosts that GCC supports, but am not certain there are no strange integer sizes we support, hence am pointing it out for special attention during review.
------------------------------

No expected change in generated code.

Bootstrapped and regtested on AArch64 with no regressions.

Hope this is acceptable -- I had originally planned to use `-ffp-contract` as agreed, until I saw mention of the old x86 bug in the same area which was not to do with floating point contraction of operations (PR 99903).

gcc/ChangeLog:

	PR middle-end/110316
	PR middle-end/99903
	* timevar.cc (NANOSEC_PER_SEC, TICKS_TO_NANOSEC, CLOCKS_TO_NANOSEC,
	nanosec_to_floating_sec, percent_of): New.
	(TICKS_TO_MSEC, CLOCKS_TO_MSEC): Remove these macros.
	(timer::validate_phases): Use integral arithmetic to check validity.
	(timer::print_row, timer::print): Convert from integral nanoseconds
	to floating point seconds before printing.
	(timer::all_zero): Change limit to nanosec count instead of
	fractional count of seconds.
	(make_json_for_timevar_time_def): Convert from integral nanoseconds
	to floating point seconds before recording.
	* timevar.h (struct timevar_time_def): Update all measurements to
	use uint64_t nanoseconds rather than seconds stored in a double.