From: "jakub at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/108840] New: Aarch64 doesn't optimize away shift counter masking
Date: Fri, 17 Feb 2023 17:59:00 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108840

            Bug ID: 108840
           Summary: Aarch64 doesn't optimize away shift counter masking
           Product: gcc
           Version: 13.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jakub at gcc dot gnu.org
  Target Milestone: ---

As mentioned in
https://gcc.gnu.org/pipermail/gcc-patches/2023-February/612214.html
aarch64 doesn't optimize away the AND instructions that mask a shift count
when there is more than one shift with the same count.  Consider
-O2 -fno-tree-vectorize:

int
foo (int x, int y)
{
  return x << (y & 31);
}

void
bar (int x[3], int y)
{
  x[0] <<= (y & 31);
  x[1] <<= (y & 31);
  x[2] <<= (y & 31);
}

void
baz (int x[3], int y)
{
  y &= 31;
  x[0] <<= y;
  x[1] <<= y;
  x[2] <<= y;
}

void corge (int, int, int);

void
qux (int x, int y, int z, int n)
{
  n &= 31;
  corge (x << n, y << n, z >> n);
}

foo is optimized correctly: combine matches the shift with the masking.  In
the remaining cases, however, the desirable combination is rejected because
of costs.  A shift with the count masking embedded should have the same
rtx_cost as a plain shift, since under the hood it really is just the shift.
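
To make the intended costing rule concrete, here is a stand-alone sketch (not
GCC code; the types and function names below are made up for illustration).
It models the rule that a shift whose count operand is (and count, bitsize-1)
should be costed as if the and were not there, because the AArch64 register
shifts (LSLV/LSRV/ASRV) already take the count modulo the data size:

/* Toy expression tree and cost function; every name here is hypothetical.  */

#include <stdbool.h>
#include <stdio.h>

enum op { OP_REG, OP_CONST, OP_AND, OP_ASHIFT };

struct expr
{
  enum op code;
  long value;               /* used by OP_CONST */
  const struct expr *op0;   /* left operand */
  const struct expr *op1;   /* right operand */
};

/* True if X is (and something, bitsize-1), i.e. masking that the hardware
   shift performs implicitly anyway.  */
static bool
count_fully_masked_p (const struct expr *x, int bitsize)
{
  return x->code == OP_AND
         && x->op1->code == OP_CONST
         && x->op1->value == bitsize - 1;
}

/* One unit per instruction.  The point is the OP_ASHIFT case, where a fully
   masked count is costed as if the and were absent.  */
static int
cost (const struct expr *x, int bitsize)
{
  switch (x->code)
    {
    case OP_REG:
    case OP_CONST:
      return 0;
    case OP_AND:
      return 1 + cost (x->op0, bitsize) + cost (x->op1, bitsize);
    case OP_ASHIFT:
      if (count_fully_masked_p (x->op1, bitsize))
        /* (ashift x (and y 31)) is a single LSL: skip the and.  */
        return 1 + cost (x->op0, bitsize);
      return 1 + cost (x->op0, bitsize) + cost (x->op1, bitsize);
    }
  return 0;
}

int
main (void)
{
  struct expr x = { OP_REG }, y = { OP_REG }, c31 = { OP_CONST, 31 };
  struct expr mask = { OP_AND, 0, &y, &c31 };
  struct expr shift = { OP_ASHIFT, 0, &x, &mask };

  /* Prints 1: the masked shift is as cheap as a plain shift.  */
  printf ("%d\n", cost (&shift, 32));
  return 0;
}

With a rule like that, the masked shifts in bar, baz and qux above would no
longer look more expensive than plain shifts, so combine would be free to
fold the masking into the shifts.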