* [x86_PATCH] New *ashl<dwi>3_doubleword_highpart define_insn_and_split.
@ 2023-06-24 18:04 Roger Sayle
2023-06-25 8:23 ` Uros Bizjak
0 siblings, 1 reply; 2+ messages in thread
From: Roger Sayle @ 2023-06-24 18:04 UTC (permalink / raw)
To: gcc-patches; +Cc: 'Uros Bizjak'
This patch contains a pair of (related) optimizations in i386.md that
allow us to generate better code for the example below (this is a step
towards fixing a bugzilla PR, but I've forgotten the number).
__int128 foo64(__int128 x, long long y)
{
__int128 t = (__int128)y << 64;
return x ^ t;
}
The hidden issue is that the RTL currently seen by reload contains
the sign extension of y from DImode to TImode, even though this
extension is dead (its result is never used) for left shifts by
WORD_SIZE bits or more.
(insn 11 8 12 2 (parallel [
            (set (reg:TI 0 ax [orig:91 y ] [91])
                (sign_extend:TI (reg:DI 1 dx [97])))
            (clobber (reg:CC 17 flags))
            (clobber (scratch:DI))
        ]) {extendditi2}
What makes this particularly undesirable is that the sign-extension
pattern above requires an additional DImode scratch register, indicated
by the clobber, which unnecessarily increases register pressure.
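The deadness of the extension can also be checked at the source level.
The sketch below is not part of the patch; it assumes a 64-bit target
with GCC's __int128 extension, and models the two possible extensions
followed by a shift. For shift counts of 64 or more the results agree,
which is exactly why the extension above is redundant:

```c
#include <stdint.h>

/* Model sign- vs zero-extension of a 64-bit value to 128 bits,
   followed by a left shift by k bits.  For k >= 64 the extension
   bits are shifted out entirely, so both forms agree.  */
unsigned __int128 shift_sext (long long y, int k)
{
  return (unsigned __int128) (__int128) y << k;           /* sign-extend first */
}

unsigned __int128 shift_zext (long long y, int k)
{
  return (unsigned __int128) (unsigned long long) y << k; /* zero-extend first */
}
```

For k < 64 the two differ on negative inputs, so the splitter below is
careful to require a shift count of at least the word size.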
The proposed solution is to add a define_insn_and_split for such
left shifts (of sign or zero extensions) whose result lives entirely
in the highpart. In this form the extension is redundant and can be
eliminated, and the pattern can be split after reload without scratch
registers or early clobbers.
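A hypothetical C model of what the splitter emits after reload (again
assuming a 64-bit target; the struct and function names are made up for
illustration): for 64 <= count < 128 only the highpart of the result is
live, so the split needs just a move, a single-word shift, and a clear.

```c
#include <stdint.h>

/* Doubleword value as a lowpart/highpart pair of 64-bit words.  */
typedef struct { uint64_t lo, hi; } dw;

/* Model of the split for 64 <= count < 128: the input word lands in
   the highpart, shifted by the excess over the word size, and the
   lowpart of the result is always zero.  */
dw shl_doubleword_highpart (uint64_t y, int count)
{
  dw r;
  r.hi = y << (count - 64);  /* single-word shift of the input */
  r.lo = 0;                  /* lowpart is unconditionally zero */
  return r;
}
```

When count is exactly 64, the shift amount is zero and the splitter
emits only the move and the clear, which is what enables the peephole2
described next.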
Splitting late exposes a second optimization opportunity: the
instruction that zeroes the lowpart can sometimes be combined with,
and simplified into, the following instruction during peephole2.
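The algebraic basis of that peephole2, sketched in C (the function name
is invented for illustration): XOR, IOR and PLUS all have zero as an
identity, so "r = 0; r op= b" collapses to a plain move.

```c
/* 0 ^ b == b, and likewise 0 | b and 0 + b, so zeroing a register
   and then applying one of these operations is just a copy.  */
unsigned long long via_zero_xor (unsigned long long b)
{
  unsigned long long r = 0;
  r ^= b;      /* peephole2 rewrites this pair as r = b */
  return r;
}
```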
For the test case above, we previously generated with -O2:
foo64:	xorl	%eax, %eax
	xorq	%rsi, %rdx
	xorq	%rdi, %rax
	ret
with this patch, we now generate:
foo64:	movq	%rdi, %rax
	xorq	%rsi, %rdx
	ret
Likewise for the related -m32 test case, we go from:
foo32:	movl	12(%esp), %eax
	movl	%eax, %edx
	xorl	%eax, %eax
	xorl	8(%esp), %edx
	xorl	4(%esp), %eax
	ret
to the improved:
foo32:	movl	12(%esp), %edx
	movl	4(%esp), %eax
	xorl	8(%esp), %edx
	ret
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32}
with no new failures. Ok for mainline?
2023-06-24 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386.md (peephole2): Simplify zeroing a register
followed by an IOR, XOR or PLUS operation on it, into a move.
(*ashl<dwi>3_doubleword_highpart): New define_insn_and_split to
eliminate (and hide from reload) unnecessary word to doubleword
extensions that are followed by left shifts by sufficiently large
(but valid) bit counts.
gcc/testsuite/ChangeLog
* gcc.target/i386/ashldi3-1.c: New 32-bit test case.
* gcc.target/i386/ashlti3-2.c: New 64-bit test case.
Thanks again,
Roger
--
[-- Attachment #2: patchts.txt --]
[-- Type: text/plain, Size: 3147 bytes --]
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index 95a6653c..7664dff 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -12206,6 +12206,18 @@
(set_attr "type" "alu")
(set_attr "mode" "QI")])
+;; Peephole2 to simplify rega = 0; rega op= regb into rega = regb.
+(define_peephole2
+ [(parallel [(set (match_operand:SWI 0 "general_reg_operand")
+ (const_int 0))
+ (clobber (reg:CC FLAGS_REG))])
+ (parallel [(set (match_dup 0)
+ (any_or_plus:SWI (match_dup 0)
+ (match_operand:SWI 1 "<general_operand>")))
+ (clobber (reg:CC FLAGS_REG))])]
+ ""
+ [(set (match_dup 0) (match_dup 1))])
+
;; Split DST = (HI<<32)|LO early to minimize register usage.
(define_insn_and_split "*concat<mode><dwi>3_1"
[(set (match_operand:<DWI> 0 "nonimmediate_operand" "=ro,r")
@@ -13365,6 +13377,28 @@
[(const_int 0)]
"ix86_split_ashl (operands, operands[3], <DWI>mode); DONE;")
+(define_insn_and_split "*ashl<dwi>3_doubleword_highpart"
+ [(set (match_operand:<DWI> 0 "register_operand" "=r")
+ (ashift:<DWI>
+ (any_extend:<DWI> (match_operand:DWIH 1 "nonimmediate_operand" "rm"))
+ (match_operand:QI 2 "const_int_operand")))
+ (clobber (reg:CC FLAGS_REG))]
+ "INTVAL (operands[2]) >= <MODE_SIZE> * BITS_PER_UNIT
+ && INTVAL (operands[2]) < <MODE_SIZE> * BITS_PER_UNIT * 2"
+ "#"
+ "&& reload_completed"
+ [(const_int 0)]
+{
+ split_double_mode (<DWI>mode, &operands[0], 1, &operands[0], &operands[3]);
+ int bits = INTVAL (operands[2]) - (<MODE_SIZE> * BITS_PER_UNIT);
+ if (!rtx_equal_p (operands[3], operands[1]))
+ emit_move_insn (operands[3], operands[1]);
+ if (bits > 0)
+ emit_insn (gen_ashl<mode>3 (operands[3], operands[3], GEN_INT (bits)));
+ ix86_expand_clear (operands[0]);
+ DONE;
+})
+
(define_insn "x86_64_shld"
[(set (match_operand:DI 0 "nonimmediate_operand" "+r*m")
(ior:DI (ashift:DI (match_dup 0)
diff --git a/gcc/testsuite/gcc.target/i386/ashldi3-1.c b/gcc/testsuite/gcc.target/i386/ashldi3-1.c
new file mode 100644
index 0000000..b61d63b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/ashldi3-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile { target ia32 } } */
+/* { dg-options "-O2" } */
+
+long long foo(long long x, int y)
+{
+ long long t = (long long)y << 32;
+ return x ^ t;
+}
+
+long long bar(long long x, int y)
+{
+ long long t = (long long)y << 35;
+ return x ^ t;
+}
+
+/* { dg-final { scan-assembler-times "xorl" 2 } } */
diff --git a/gcc/testsuite/gcc.target/i386/ashlti3-2.c b/gcc/testsuite/gcc.target/i386/ashlti3-2.c
new file mode 100644
index 0000000..7e21ab9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/ashlti3-2.c
@@ -0,0 +1,17 @@
+/* { dg-do compile { target int128 } } */
+/* { dg-options "-O2" } */
+
+__int128 foo(__int128 x, long long y)
+{
+ __int128 t = (__int128)y << 64;
+ return x ^ t;
+}
+
+__int128 bar(__int128 x, long long y)
+{
+ __int128 t = (__int128)y << 67;
+ return x ^ t;
+}
+
+/* { dg-final { scan-assembler-not "xorl" } } */
+/* { dg-final { scan-assembler-times "xorq" 2 } } */
* Re: [x86_PATCH] New *ashl<dwi>3_doubleword_highpart define_insn_and_split.
2023-06-24 18:04 [x86_PATCH] New *ashl<dwi>3_doubleword_highpart define_insn_and_split Roger Sayle
@ 2023-06-25 8:23 ` Uros Bizjak
0 siblings, 0 replies; 2+ messages in thread
From: Uros Bizjak @ 2023-06-25 8:23 UTC (permalink / raw)
To: Roger Sayle; +Cc: gcc-patches
On Sat, Jun 24, 2023 at 8:04 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
> [full patch description quoted above, trimmed]
OK.
Thanks,
Uros.