From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/104151] [9/10/11/12 Regression] x86: excessive code generated for 128-bit byteswap
Date: Fri, 21 Jan 2022 10:18:52 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104151

--- Comment #8 from Richard Biener ---
(In reply to rsandifo@gcc.gnu.org from comment #7)
> (In reply to Richard Biener from comment #6)
> > Richard - I'm sure we can construct a similar case for aarch64 where
> > argument passing and vector mode use cause spilling?
> >
> > On x86 the simplest testcase showing this is
> >
> > typedef unsigned long long v2di __attribute__((vector_size(16)));
> > v2di bswap(__uint128_t a)
> > {
> >   return *(v2di *)&a;
> > }
> >
> > that produces
> >
> > bswap:
> > .LFB0:
> >         .cfi_startproc
> >         sub     sp, sp, #16
> >         .cfi_def_cfa_offset 16
> >         stp     x0, x1, [sp]
> >         ldr     q0, [sp]
> >         add     sp, sp, 16
> >         .cfi_def_cfa_offset 0
> >         ret
> >
> > on arm for me.  Maybe the stp x0, x1 store can forward to the ldr load
> > though, and I'm not sure there's another way to move x0/x1 to q0.
> It looks like this is a deliberate choice for aarch64.  The generic
> costing has:
>
>   /* Avoid the use of slow int<->fp moves for spilling by setting
>      their cost higher than memmov_cost.  */
>   5, /* GP2FP  */
>
> So in cases like the above, we're telling IRA that spilling to
> memory and reloading is cheaper than moving between registers.
> For -mtune=thunderx we generate:
>
>         fmov    d0, x0
>         ins     v0.d[1], x1
>         ret
>
> instead.

Ah, interesting.  On x86 we disallow/pessimize GPR<->XMM moves with some
tunings as well, but even there a sequence like

        movq    %rdi, -24(%rsp)
        movq    %rsi, -16(%rsp)
        movq    -24(%rsp), %xmm0
        movq    -16(%rsp), %xmm1
        unpckhpd %xmm0, %xmm1

(fixme - that's wrong, but you get the idea) instead of

        movq    %rdi, -24(%rsp)
        movq    %rsi, -16(%rsp)
        movdqa  -24(%rsp), %xmm0

would likely be faster.  I'm not sure one can get LRA to produce this
two-staged reload with just appropriate costing, though.

As said, the key cost of the bad sequence is the failing store forwarding,
so it is specific to spilling a TImode value as two GPRs and reloading it
as a single FPR V2DImode.
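
For context, the quoted "5, /* GP2FP */" line comes from the aarch64
backend's generic register-move cost table.  A rough sketch of its shape
(struct and field names as recalled from gcc/config/aarch64/aarch64.cc,
not a verbatim copy; exact values differ between tunings and GCC
versions):

  /* Approximate reconstruction of the generic aarch64 register-move
     cost table; only the GP2FP line and its comment are quoted in the
     discussion above.  */
  static const struct cpu_regmove_cost generic_regmove_cost =
  {
    1, /* GP2GP  */
    /* Avoid the use of slow int<->fp moves for spilling by setting
       their cost higher than memmov_cost.  */
    5, /* GP2FP  */
    5, /* FP2GP  */
    2  /* FP2FP  */
  };

With GP2FP costed above memmov_cost, IRA/LRA prefers a stack spill plus
reload over a direct GPR->FPR move, which is how the stp/ldr sequence
above comes about.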
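
Purely as an illustration, a minimal sketch of what the -mtune=thunderx
fmov/ins sequence roughly corresponds to at the source level, assuming
arm_neon.h on a little-endian aarch64 target (the helper name from_u128
is made up for the example, and the exact instruction selection is of
course up to the compiler):

  #include <arm_neon.h>
  #include <stdint.h>

  /* Build a 128-bit vector from the two GPR halves of a __uint128_t
     without going through the stack: vcreate_u64 is a plain GPR->FPR
     move and vcombine_u64 fills the high lane, roughly the fmov/ins
     pattern quoted above.  */
  uint64x2_t from_u128 (unsigned __int128 a)
  {
    uint64_t lo = (uint64_t) a;
    uint64_t hi = (uint64_t) (a >> 64);
    return vcombine_u64 (vcreate_u64 (lo), vcreate_u64 (hi));
  }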
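
Similarly, a hedged sketch of the direct GPR->XMM route on x86-64 with
SSE2 intrinsics (combine_u64 is a hypothetical helper, not from the
testcase); whether it beats the spill-and-reload depends on the tuning's
inter-unit move cost discussed above:

  #include <emmintrin.h>
  #include <stdint.h>

  /* Merge two 64-bit GPR values into one XMM register without a stack
     round trip: _mm_cvtsi64_si128 is a direct GPR->XMM movq and
     _mm_unpacklo_epi64 interleaves the two low quadwords.  */
  __m128i combine_u64 (uint64_t lo, uint64_t hi)
  {
    return _mm_unpacklo_epi64 (_mm_cvtsi64_si128 ((long long) lo),
                               _mm_cvtsi64_si128 ((long long) hi));
  }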