From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/104151] [9/10/11/12 Regression] x86: excessive code generated for 128-bit byteswap
Date: Fri, 21 Jan 2022 10:29:53 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104151

--- Comment #9 from Richard Biener ---
(In reply to Richard Biener from comment #8)
> (In reply to rsandifo@gcc.gnu.org from comment #7)
> > (In reply to Richard Biener from comment #6)
> > > Richard - I'm sure we can construct a similar case for aarch64 where
> > > argument passing and vector mode use cause spilling?
> > >
> > > On x86 the simplest testcase showing this is
> > >
> > > typedef unsigned long long v2di __attribute__((vector_size(16)));
> > > v2di bswap(__uint128_t a)
> > > {
> > >   return *(v2di *)&a;
> > > }
> > >
> > > that produces
> > >
> > > bswap:
> > > .LFB0:
> > >         .cfi_startproc
> > >         sub     sp, sp, #16
> > >         .cfi_def_cfa_offset 16
> > >         stp     x0, x1, [sp]
> > >         ldr     q0, [sp]
> > >         add     sp, sp, 16
> > >         .cfi_def_cfa_offset 0
> > >         ret
> > >
> > > on aarch64 for me.  Maybe the stp x0, x1 store can forward to the
> > > ldr load, though, and I'm not sure there's another way to move
> > > x0/x1 to q0.
> > It looks like this is a deliberate choice for aarch64.  The generic
> > costing has:
> >
> >   /* Avoid the use of slow int<->fp moves for spilling by setting
> >      their cost higher than memmov_cost.  */
> >   5, /* GP2FP  */
> >
> > So in cases like the above, we're telling IRA that spilling to
> > memory and reloading is cheaper than moving between registers.
> > For -mtune=thunderx we generate:
> >
> >         fmov    d0, x0
> >         ins     v0.d[1], x1
> >         ret
> >
> > instead.
>
> Ah, interesting.  On x86 we disallow/pessimize GPR<->XMM moves with
> some tunings as well.  Still, there a sequence like
>
>         movq    %rdi, -24(%rsp)
>         movq    %rsi, -16(%rsp)
>         movq    -24(%rsp), %xmm0
>         movq    -16(%rsp), %xmm1
>         punpcklqdq %xmm1, %xmm0
>
> instead of
>
>         movq    %rdi, -24(%rsp)
>         movq    %rsi, -16(%rsp)
>         movdqa  -24(%rsp), %xmm0
>
> would likely be faster, because the two 8-byte reloads can each be
> forwarded from the preceding 8-byte stores, while the single 16-byte
> load cannot.  I'm not sure one can get LRA to produce this two-staged
> reload with just appropriate costing, though.  As said, the key cost
> of the bad sequence is the failing store forwarding, so it is special
> to spilling a two-GPR TImode and reloading it as a single FPR
> V2DImode.
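For the record, the spill-free combine can also be forced from C with
intrinsics.  A minimal sketch, assuming x86-64; bswap2 is a made-up
name, and whether _mm_set_epi64x expands to direct GPR->XMM moves
rather than a stack bounce still depends on the active tuning:

#include <immintrin.h>

typedef unsigned long long v2di __attribute__((vector_size(16)));

/* Illustrative only: build the vector directly from the two GPR
   halves of the __uint128_t argument instead of spilling it.
   _mm_set_epi64x takes (high, low).  */
v2di bswap2(__uint128_t a)
{
  return (v2di)_mm_set_epi64x((long long)(a >> 64), (long long)a);
}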
And a peculiarity of aarch64 seems to be that the argument is passed
in (reg:TI x0), which is supposedly a register pair.  On x86 there are
no TImode register-pair hard registers, I think; instead the __int128
is passed as two 8-byte halves in regular GPRs.  So on aarch64 we have
the simpler

(insn 13 3 10 2 (set (reg:TI 95)
        (reg:TI 0 x0 [ a ])) "t.ii":3:2 58 {*movti_aarch64}
     (expr_list:REG_DEAD (reg:TI 0 x0 [ a ])
        (nil)))
(insn 10 13 11 2 (set (reg/i:V2DI 32 v0)
        (subreg:V2DI (reg:TI 95) 0)) "t.ii":5:2 1173 {*aarch64_simd_movv2di}
     (expr_list:REG_DEAD (reg:TI 95)
        (nil)))

before RA.
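A dump like the above can be reproduced with the RTL dump flags.  A
sketch, assuming the testcase sits in t.ii (per the insn locations)
and an aarch64-targeted compiler is used; picking -fdump-rtl-ira as
the pass to dump is illustrative, any pre-RA dump shows the same
insns:

        g++ -O2 -S -fdump-rtl-ira t.ii

The insns then appear in the generated t.ii.*r.ira dump file, still
before register allocation.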