From: "eggert at cs dot ucla.edu"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/111655] New: wrong code generated for __builtin_signbit on x86-64 -O2
Date: Sun, 01 Oct 2023 17:22:55 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111655

            Bug ID: 111655
           Summary: wrong code generated for __builtin_signbit on x86-64
                    -O2
           Product: gcc
           Version: 13.2.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: eggert at cs dot ucla.edu
  Target Milestone: ---

I ran into this bug when testing Gnulib code on Fedora 38 x86-64, which uses
gcc (GCC) 13.2.1 20230728
(Red Hat 13.2.1-1).  The problem is a regression from gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-44), which does the right thing.

Here is a stripped-down version of the bug.  Compile and run the following
code with "gcc -O2 t.i; ./a.out".

int
main ()
{
  double x = 0.0 / 0.0;
  if (!__builtin_signbit (x))
    x = -x;
  return !__builtin_signbit (x);
}

Although a.out's exit status should be 0, it is 1.  If I compile without
-O2 the bug goes away.  Here's the key part of the generated assembly
language:

main:
	pxor	%xmm0, %xmm0
	divsd	%xmm0, %xmm0
	xorpd	.LC1(%rip), %xmm0
	movmskpd	%xmm0, %eax
	testb	$1, %al
	sete	%al
	movzbl	%al, %eax
	ret
.LC1:
	.long	0
	.long	-2147483648

On the x86-64, the "divsd %xmm0, %xmm0" instruction that implements
0.0 / 0.0 generates a NaN with the sign bit set.  I determined this by
testing on a Xeon W-1350, although I don't see where the NaN's sign bit
is documented by Intel in this situation.

It appears that GCC's optimization incorrectly assumes that 0.0 / 0.0
generates a NaN with the sign bit clear, which causes the
"if (!__builtin_signbit (x)) x = -x;" to be compiled as if it were merely
"x = -x;", which is obviously incorrect.