From: "rsandifo at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/104151] [9/10/11/12 Regression] x86: excessive code generated for 128-bit byteswap
Date: Fri, 21 Jan 2022 09:11:56 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104151

--- Comment #7 from rsandifo at gcc dot gnu.org ---
(In reply to Richard Biener from comment #6)
> Richard - I'm sure we can construct a similar case for aarch64 where
> argument passing and vector mode use cause spilling?
>
> On x86 the simplest testcase showing this is
>
>   typedef unsigned long long v2di __attribute__((vector_size(16)));
>   v2di bswap(__uint128_t a)
>   {
>     return *(v2di *)&a;
>   }
>
> that produces
>
>   bswap:
>   .LFB0:
>           .cfi_startproc
>           sub     sp, sp, #16
>           .cfi_def_cfa_offset 16
>           stp     x0, x1, [sp]
>           ldr     q0, [sp]
>           add     sp, sp, 16
>           .cfi_def_cfa_offset 0
>           ret
>
> on arm for me.  Maybe the stp x0, x1 store can forward to the ldr load
> though and I'm not sure there's another way to move x0/x1 to q0.

It looks like this is a deliberate choice for aarch64.  The generic
costing has:

  /* Avoid the use of slow int<->fp moves for spilling by setting
     their cost higher than memmov_cost.  */
  5, /* GP2FP */

So in cases like the above, we're telling IRA that spilling to memory
and reloading is cheaper than moving between registers.  For
-mtune=thunderx we generate:

  fmov    d0, x0
  ins     v0.d[1], x1
  ret

instead.
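
(For context, the quoted cost entry comes from the generic register-move
cost table in GCC's aarch64 backend.  The sketch below is an approximation
of that definition in gcc/config/aarch64/aarch64.c; the surrounding values
and field order are reconstructed from memory and may differ between
releases, so treat it as illustrative rather than authoritative.)

    static const struct cpu_regmove_cost generic_regmove_cost =
    {
      1, /* GP2GP */
      /* Avoid the use of slow int<->fp moves for spilling by setting
         their cost higher than memmov_cost.  */
      5, /* GP2FP */
      5, /* FP2GP */
      2  /* FP2FP */
    };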
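
(A hedged way to reproduce the two code sequences discussed above.  The
cross-compiler name and the file name spill.c are illustrative; only the
testcase itself and -mtune=thunderx are taken from the comments.)

    $ cat spill.c
    typedef unsigned long long v2di __attribute__((vector_size(16)));
    v2di bswap(__uint128_t a) { return *(v2di *)&a; }

    $ aarch64-linux-gnu-gcc -O2 -S -o - spill.c
      # default (generic) tuning: spills x0/x1 with stp and reloads q0 with ldr

    $ aarch64-linux-gnu-gcc -O2 -mtune=thunderx -S -o - spill.c
      # thunderx tuning: moves the two halves directly with fmov/ins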