From: "jakub at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/113372] wrong code with _BitInt() arithmetics at -O1
Date: Mon, 15 Jan 2024 10:28:57 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113372

Jakub Jelinek changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org

--- Comment #2 from Jakub Jelinek ---
Seems this is invalid variable sharing during RTL expansion; I'm afraid we
have tons of open bugs against that, and I don't know what to do about it
on the bitint lowering side.
Bitint lowering creates 3 large variables:
  unsigned long bitint.8[75];
  unsigned long bitint.7[75];
  unsigned long bitint.6[100];
where bitint.6 corresponds to
  _3 = _2 % y_14(D);
and later, when _3 isn't needed anymore, to
  _8 = -y_14(D);
bitint.7 corresponds to
  _4 = -_3;
  _5 = (unsigned _BitInt(4745)) _4;
and bitint.8 corresponds to
  _7 = _5 * _6;
Reusing the same variable bitint.6 for 2 different _BitInt(6384) SSA_NAMEs
ought to be fine; they aren't live at the same time.
After lowering, we end up with:
  .DIVMODBITINT (0B, 0, &bitint.6, 6384, &D.2796, -8, &y, -6384);
  // Above fills in the bitint.6 array
  ... loop which uses bitint.6 to compute bitint.7 ...
  bitint.6 ={v} {CLOBBER(eos)};
  ...
  .MULBITINT (&bitint.8, 4745, &bitint.7, 4745, &D.2795, -8);
  // Above fills in the bitint.8 array
  ... loop to compute bitint.6 ...
  ... loop which uses bitint.6 ...
  ...
  _21 = MEM[(unsigned long *)&bitint.8];
  _22 = (_BitInt(8)) _21;
  _18 = _22;
  return _18;
  // I.e. the return value is read from bitint.8
But unfortunately RTL expansion decides to use the same underlying memory
for the bitint.6 and bitint.8 arrays, even though the second use of
bitint.6 happens while bitint.8 is still live. Or am I using an incorrect
CLOBBER kind when an underlying variable is used for more than one
SSA_NAME? Like, should that be just eob rather than eos?
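To make the liveness overlap concrete, here is a plain-C sketch of the shape
of the lowered function. This is NOT the actual PR113372 reproducer: ordinary
unsigned long arrays stand in for the lowered _BitInt limb storage, and
divmod_fill/mul_fill are hypothetical stand-ins for the .DIVMODBITINT and
.MULBITINT internal calls (only the limb counts 100 and 75 are taken from the
report). The point is that the second fill of bitint6 happens while bitint8
is still live, so giving the two arrays the same stack slot is wrong:

```c
/* Hypothetical stand-in for .DIVMODBITINT: fill n remainder limbs.  */
static void divmod_fill(unsigned long *dst, unsigned long seed, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = seed + i;
}

/* Hypothetical stand-in for .MULBITINT: fill n product limbs from src.  */
static void mul_fill(unsigned long *dst, const unsigned long *src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i] * 3;
}

unsigned long f(unsigned long x)
{
  unsigned long bitint6[100];   /* shared by two SSA_NAMEs: _3, then _8 */
  unsigned long bitint7[75];
  unsigned long bitint8[75];

  divmod_fill(bitint6, x, 100);          /* _3 = _2 % y_14(D)          */
  for (int i = 0; i < 75; i++)           /* loop computing bitint.7    */
    bitint7[i] = -bitint6[i];
  /* Here the lowered IL emits: bitint.6 ={v} {CLOBBER(eos)};  _3 dead */

  mul_fill(bitint8, bitint7, 75);        /* _7 = _5 * _6               */

  divmod_fill(bitint6, x + 1, 100);      /* second use: _8 = -y_14(D)  */
  /* If RTL expansion assigned bitint6 and bitint8 the same stack slot,
     this second fill would clobber the still-live product in bitint8. */
  (void)bitint6;

  return bitint8[0];                     /* return value read from bitint.8 */
}
```

With distinct slots (as plain C guarantees here), f(5) returns the first
product limb, 3 * (unsigned long)-(5), untouched by the second fill.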