From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/114252] Introducing bswapsi reduces code performance
Date: Thu, 07 Mar 2024 09:14:57 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114252

Richard Biener changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sayle at gcc dot gnu.org

--- Comment #10 from Richard Biener ---
(In reply to Georg-Johann Lay from comment #8)
> (In reply to Richard Biener from comment #7)
> > Note I do understand what you are saying; it's just that the middle-end,
> > in detecting and using __builtin_bswap32, does what it does everywhere
> > else - it checks whether the target
> > implements the operation.
> >
> > The middle-end doesn't try to actually compare costs (it has no idea of
> > the bswapsi costs),
>
> But even when the bswapsi insn costs nothing, the v14 code has these six
> additional movqi insns 32...37 compared to the v13 code.  In order to get
> the same performance as the v13 code, a bswapsi would have to cost minus
> six insns, and an optimizer that assumes negative costs is not reasonable,
> in particular because the recognition of bswap opportunities serves
> optimization -- or is supposed to serve it, as far as I understand.
>
> > and it most definitely doesn't see how AVR is special in having only
> > QImode registers and thus the created SImode load (which the target
> > supports!) will end up as four registers.
>
> Even when the bswap insn would cost nothing, the code is worse.

Yes, I know.

> > The only thing that maybe would make sense with AVR exposing bswapsi is
> > users calling __builtin_bswap, but since it always expands as a libcall
> > even that makes no sense.
>
> It makes perfect sense when C/C++ code uses __builtin_bswap32:
>
> * With the current bswapsi insn, the code does a call that performs
>   SI:22 = bswap(SI:22) with NO additional register pressure.
>
> * Without the bswap insn, the code does a real ABI call that performs
>   SI:22 = bswap(SI:22) PLUS IT CLOBBERS r18, r19, r20, r21, r26, r27,
>   r30 and r31, which are the most powerful GPRs.

I think the target controls the "libcall" ABI that's used for calls to
libgcc, but somehow we fail to go that path (though I can see __bswapsi
and __bswapdi even in the x86_64 libgcc).  In particular

  OPTAB_NC(bswap_optab, "bswap$a2", BSWAP)

doesn't list bswap as having a libfunc ...

> > So my preferred fix would be to remove bswapsi from avr.md?
>
> Is there a way that the backend can fold a call to an insn that performs
> better than a call?  Like in TARGET_FOLD_BUILTIN?  As far as I know, the
> backend can only fold target builtins, but not common builtins?
> Tree fold cannot fold to an insn, obviously, but it could fold to inline
> asm, no?
>
> Or can the target change an optabs entry so that it expands to an insn
> that's more profitable than a respective call?  (Like avr.md's bswap insn
> with a transparent call is more profitable than a real call.)

I think the target should implement an inline bswap, possibly via a
define_insn_and_split or define_split, so the byte ops are only exposed at
a desired point; important points being lower_subreg (split-wide-types)
and register allocation - possibly lower_subreg should itself know how to
handle bswap (though the degenerate AVR case is quite special).  I've
CCed Roger, who might know the traps with "implementing" an SImode bswap
on a target with just QImode regs but multi-reg operations not decomposed
during most of the RTL pipeline(?)

> The avr backend does this for many other things, too:
>
> divmod, SI and PSI multiplications, parity, popcount, clz, ffs,

Indeed.  Maybe it's never the case that a loop implementing clz is better
than a libcall, or that separate div/mod are better than divmod (oddly,
divmod also lacks the libcall entry for the optabs...).

> > Does it benefit from recognizing bswap done with shifts on an int?
>
> I don't fully understand that question.  You mean that writing code
> which shifts bytes around, like in
>
>   uint32_t res = 0;
>   res |= ((uint32_t) buf[0]) << 24;
>   res |= ((uint32_t) buf[1]) << 16;
>   res |= ((uint32_t) buf[2]) << 8;
>   res |= buf[3];
>   return res;
>
> is better than a bswapsi call?

Yeah.  Or comparing to open-coding the bswap without going through the
call.
I don't have an AVR libgcc around, but libgcc2.c has

#ifdef L_bswapsi2
SItype
__bswapsi2 (SItype u)
{
  return ((((u) & 0xff000000u) >> 24)
          | (((u) & 0x00ff0000u) >> 8)
          | (((u) & 0x0000ff00u) << 8)
          | (((u) & 0x000000ffu) << 24));
}
#endif

and that's compiled to

__bswapsi2:
	/* prologue: function */
	/* frame size = 0 */
	/* stack size = 0 */
	.L__stack_usage = 0
	rcall __bswapsi2
	/* epilogue start */
	ret

so this can't be it ;)