* [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.
@ 2022-08-08 8:07 Roger Sayle
2022-08-08 11:42 ` Richard Biener
0 siblings, 1 reply; 6+ messages in thread
From: Roger Sayle @ 2022-08-08 8:07 UTC (permalink / raw)
To: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 2145 bytes --]
This patch resolves PR tree-optimization/71343, a missed-optimization
enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
This requires two related (sets of) optimizations to be added to match.pd.
The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
for many binary operators, including AND, IOR, XOR, and (if overflow
isn't an issue) PLUS and MINUS. Likewise, the right shifts (both logical
and arithmetic) and bit-wise logical operators can be simplified in a
similar fashion. These all reduce the number of GIMPLE binary operations
from 3 to 2, by combining/eliminating a shift operation.
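As an illustration of the first set of rules (just a sketch of the intended
effect, not one of the testcases added by the patch), a function like the
following should need only two binary operations after optimization:

unsigned int
example_and (unsigned int x, unsigned int y)
{
  /* As written: two shifts and one AND, i.e. three binary operations.  */
  return (x << 2) & (y << 2);
  /* Expected to be simplified to (x & y) << 2: one AND and one shift.  */
}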
The second optimization reflects that the middle-end doesn't impose a
canonical form on multiplications by powers of two, vs. left shifts,
instead leaving these operations as specified by the programmer
unless there's a good reason to change them. Hence, GIMPLE code may
contain the expressions "X * 8" and "X << 3" even though these represent
the same value/computation. The tweak to match.pd is that comparison
operations whose operands are equivalent non-canonical expressions can
be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
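As a rough illustration of this second tweak (again a sketch, not one of the
new testcases), both of the following should now fold to constants at -O2:

unsigned int
example_eq (unsigned int x)
{
  return (x * 8) == (x << 3);   /* expected to fold to 1 (always true)  */
}

unsigned int
example_gt (unsigned int x)
{
  return (x << 2) > 4 * x;      /* expected to fold to 0 (always false)  */
}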
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32},
with no new failures. Ok for mainline?
2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/71343
* match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
(op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
(cmp:c (lshift @0) (mult @1)): Optimize comparisons between
shifts by integer constants and multiplications by powers of 2.
gcc/testsuite/ChangeLog
PR tree-optimization/71343
* gcc.dg/pr71343-1.c: New test case.
* gcc.dg/pr71343-2.c: Likewise.
Thanks in advance,
Roger
--
[-- Attachment #2: patchfd.txt --]
[-- Type: text/plain, Size: 4890 bytes --]
diff --git a/gcc/match.pd b/gcc/match.pd
index f82f94a..d6b9cfb 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -982,6 +982,35 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
&& tree_nop_conversion_p (type, TREE_TYPE (@1)))
(lshift @0 @2)))
+/* Shifts by constants distribute over several binary operations,
+ hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
+(for op (plus minus)
+ (simplify
+ (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
+ (if (INTEGRAL_TYPE_P (type)
+ && TYPE_OVERFLOW_WRAPS (type)
+ && !TYPE_SATURATING (type)
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (type))
+ (lshift (op @0 @2) @1))))
+
+(for op (bit_and bit_ior bit_xor)
+ (simplify
+ (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
+ (if (INTEGRAL_TYPE_P (type)
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (type))
+ (lshift (op @0 @2) @1)))
+ (simplify
+ (op (rshift:s @0 INTEGER_CST@1) (rshift:s @2 INTEGER_CST@1))
+ (if (INTEGRAL_TYPE_P (type)
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (type))
+ (rshift (op @0 @2) @1))))
+
/* Fold (1 << (C - x)) where C = precision(type) - 1
into ((1 << C) >> x). */
(simplify
@@ -2241,6 +2270,28 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
(if (TREE_INT_CST_LOW (@1) & 1)
{ constant_boolean_node (cmp == NE_EXPR, type); })))
+/* Some tree expressions are intentionally non-canonical.
+ We handle the comparison of the equivalent forms here. */
+(for cmp (eq le ge)
+ (simplify
+ (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
+ (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
+ { constant_boolean_node (true, type); })))
+
+(for cmp (ne lt gt)
+ (simplify
+ (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
+ (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
+ { constant_boolean_node (false, type); })))
+
/* Arguments on which one can call get_nonzero_bits to get the bits
possibly set. */
(match with_possible_nonzero_bits
diff --git a/gcc/testsuite/gcc.dg/pr71343-1.c b/gcc/testsuite/gcc.dg/pr71343-1.c
new file mode 100644
index 0000000..146f5fc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-1.c
@@ -0,0 +1,56 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int foo_plus(unsigned int a, unsigned int b)
+{
+ return (a << 2) + (b << 2);
+}
+
+unsigned int foo_and(unsigned int a, unsigned int b)
+{
+ return (a << 2) & (b << 2);
+}
+
+unsigned int foo_ior(unsigned int a, unsigned int b)
+{
+ return (a << 2) | (b << 2);
+}
+
+unsigned int foo_xor(unsigned int a, unsigned int b)
+{
+ return (a << 2) ^ (b << 2);
+}
+
+unsigned int bar_and(unsigned int a, unsigned int b)
+{
+ return (a >> 2) & (b >> 2);
+}
+
+unsigned int bar_ior(unsigned int a, unsigned int b)
+{
+ return (a >> 2) | (b >> 2);
+}
+
+unsigned int bar_xor(unsigned int a, unsigned int b)
+{
+ return (a >> 2) ^ (b >> 2);
+}
+
+int baz_and(int a, int b)
+{
+ return (a >> 2) & (b >> 2);
+}
+
+int baz_ior(int a, int b)
+{
+ return (a >> 2) | (b >> 2);
+}
+
+int baz_xor(int a, int b)
+{
+ return (a >> 2) ^ (b >> 2);
+}
+
+/* { dg-final { scan-tree-dump-times " << " 4 "optimized" } } */
+/* { dg-final { scan-tree-dump-times " >> " 6 "optimized" } } */
+
diff --git a/gcc/testsuite/gcc.dg/pr71343-2.c b/gcc/testsuite/gcc.dg/pr71343-2.c
new file mode 100644
index 0000000..11800a9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-2.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int test1(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == a * 4 + b * 4;
+}
+
+unsigned int test2(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == (a + b) << 2;
+}
+
+unsigned int test3(unsigned int a , unsigned int b)
+{
+ return a * 4 + b * 4 == (a + b) * 4;
+}
+
+unsigned int test4(unsigned int a , unsigned int b)
+{
+ return (a + b) << 2 == (a + b) * 4;
+}
+
+unsigned int test5(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == (a + b) * 4;
+}
+
+unsigned int test6(unsigned int a , unsigned int b)
+{
+ return (a + b) << 2 == a * 4 + b * 4;
+}
+
+/* { dg-final { scan-tree-dump-times "return 1" 6 "optimized" } } */
* Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.
2022-08-08 8:07 [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C Roger Sayle
@ 2022-08-08 11:42 ` Richard Biener
2022-08-12 21:45 ` [PATCH take #2] " Roger Sayle
2022-09-13 17:54 ` [PATCH] PR tree-optimization/71343: Value number X<<2 as X*4 Roger Sayle
0 siblings, 2 replies; 6+ messages in thread
From: Richard Biener @ 2022-08-08 11:42 UTC (permalink / raw)
To: Roger Sayle; +Cc: GCC Patches
On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> This patch resolves PR tree-optimization/71343, a missed-optimization
> enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> This requires two related (sets of) optimizations to be added to match.pd.
>
> The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> for many binary operators, including AND, IOR, XOR, and (if overflow
> isn't an issue) PLUS and MINUS. Likewise, the right shifts (both logical
> and arithmetic) and bit-wise logical operators can be simplified in a
> similar fashion. These all reduce the number of GIMPLE binary operations
> from 3 to 2, by combining/eliminating a shift operation.
>
> The second optimization reflects that the middle-end doesn't impose a
> canonical form on multiplications by powers of two, vs. left shifts,
> instead leaving these operations as specified by the programmer
> unless there's a good reason to change them. Hence, GIMPLE code may
> contain the expressions "X * 8" and "X << 3" even though these represent
> the same value/computation. The tweak to match.pd is that comparison
> operations whose operands are equivalent non-canonical expressions can
> be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
> evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
>
> This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> and make -k check, both with and without --target_board=unix{-m32},
> with no new failures. Ok for mainline?
+/* Shifts by constants distribute over several binary operations,
+ hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
+(for op (plus minus)
+ (simplify
+ (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
+ (if (INTEGRAL_TYPE_P (type)
+ && TYPE_OVERFLOW_WRAPS (type)
+ && !TYPE_SATURATING (type)
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (type))
I do wonder why we need to restrict this to shifts by constants?
Any out-of-bound shift was already there, no?
+/* Some tree expressions are intentionally non-canonical.
+ We handle the comparison of the equivalent forms here. */
+(for cmp (eq le ge)
+ (simplify
+ (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
+ (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
+ { constant_boolean_node (true, type); })))
+
+(for cmp (ne lt gt)
+ (simplify
+ (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
+ (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+ && tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
+ { constant_boolean_node (false, type); })))
hmm. I wonder if it makes more sense to handle this in value-numbering.
tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not
exactly canonicalization issues but the shift vs mult could be handled
there by just performing the alternate lookup. That would also enable
CSE and by means of that of course the comparisons you do above.
Thanks,
Richard.
>
> 2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
>
> gcc/ChangeLog
> PR tree-optimization/71343
> * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> shifts by integer constants and multiplications by powers of 2.
>
> gcc/testsuite/ChangeLog
> PR tree-optimization/71343
> * gcc.dg/pr71343-1.c: New test case.
> * gcc.dg/pr71343-2.c: Likewise.
>
>
> Thanks in advance,
> Roger
> --
>
* [PATCH take #2] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.
2022-08-08 11:42 ` Richard Biener
@ 2022-08-12 21:45 ` Roger Sayle
2022-08-15 7:48 ` Richard Biener
2022-09-13 17:54 ` [PATCH] PR tree-optimization/71343: Value number X<<2 as X*4 Roger Sayle
1 sibling, 1 reply; 6+ messages in thread
From: Roger Sayle @ 2022-08-12 21:45 UTC (permalink / raw)
To: 'Richard Biener'; +Cc: 'GCC Patches'
[-- Attachment #1: Type: text/plain, Size: 5990 bytes --]
Hi Richard,
Many thanks for the review and useful suggestions. I (think I) agree that
handling non-canonical forms in value numbering makes more sense,
so this revised patch is just the first (non-controversial) part of the original
submission, which incorporates your observation that it doesn't need to
be limited to (valid) constant shifts and can be generalized to any
shift without introducing undefined behaviour that didn't exist before.
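For instance (an illustrative sketch, not one of the testcases in the patch),
the generalized rule now also applies when the shift count is a variable;
if n is out of range, the undefined behaviour was already present in the
original expression:

unsigned int
example_var_shift (unsigned int x, unsigned int y, int n)
{
  /* Expected to be rewritten as (x & y) << n; an out-of-range n was
     undefined behaviour before the transformation as well.  */
  return (x << n) & (y << n);
}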
This revised patch has been tested on x86_64-pc-linux-gnu with
make bootstrap and make -k check, both with and without
--target_board=unix{-m32} with no new failures. Ok for mainline?
2022-08-12 Roger Sayle <roger@nextmovesoftware.com>
Richard Biener <rguenther@suse.de>
gcc/ChangeLog
PR tree-optimization/71343
* match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
(op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
gcc/testsuite/ChangeLog
PR tree-optimization/71343
* gcc.dg/pr71343-1.c: New test case.
Thanks,
Roger
--
> -----Original Message-----
> From: Richard Biener <richard.guenther@gmail.com>
> Sent: 08 August 2022 12:42
> To: Roger Sayle <roger@nextmovesoftware.com>
> Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> (X&Y)<<C.
>
> On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> <roger@nextmovesoftware.com> wrote:
> >
> >
> > This patch resolves PR tree-optimization/71343, a missed-optimization
> > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > This requires two related (sets of) optimizations to be added to match.pd.
> >
> > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > for many binary operators, including AND, IOR, XOR, and (if overflow
> > isn't an issue) PLUS and MINUS. Likewise, the right shifts (both
> > logical and arithmetic) and bit-wise logical operators can be
> > simplified in a similar fashion. These all reduce the number of
> > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> > operation.
> >
> > The second optimization reflects that the middle-end doesn't impose a
> > canonical form on multiplications by powers of two, vs. left shifts,
> > instead leaving these operations as specified by the programmer unless
> > there's a good reason to change them. Hence, GIMPLE code may contain
> > the expressions "X * 8" and "X << 3" even though these represent the
> > same value/computation. The tweak to match.pd is that comparison
> > operations whose operands are equivalent non-canonical expressions can
> > be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
> > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> >
> > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > and make -k check, both with and without --target_board=unix{-m32},
> > with no new failures. Ok for mainline?
>
> +/* Shifts by constants distribute over several binary operations,
> + hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
> +(for op (plus minus)
> + (simplify
> + (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> + (if (INTEGRAL_TYPE_P (type)
> + && TYPE_OVERFLOW_WRAPS (type)
> + && !TYPE_SATURATING (type)
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (type))
>
> I do wonder why we need to restrict this to shifts by constants?
> Any out-of-bound shift was already there, no?
>
> +/* Some tree expressions are intentionally non-canonical.
> + We handle the comparison of the equivalent forms here. */
> +(for cmp (eq le ge)
> + (simplify
> + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> + { constant_boolean_node (true, type); })))
> +
> +(for cmp (ne lt gt)
> + (simplify
> + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> + { constant_boolean_node (false, type); })))
>
> hmm. I wonder if it makes more sense to handle this in value-numbering.
> tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> canonicalization issues but the shift vs mult could be handled there by just
> performing the alternate lookup. That would also enable CSE and by means of
> that of course the comparisons you do above.
>
> Thanks,
> Richard.
>
> >
> > 2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> > PR tree-optimization/71343
> > * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> > expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> > (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> > to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> > (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> > shifts by integer constants and multiplications by powers of 2.
> >
> > gcc/testsuite/ChangeLog
> > PR tree-optimization/71343
> > * gcc.dg/pr71343-1.c: New test case.
> > * gcc.dg/pr71343-2.c: Likewise.
> >
> >
> > Thanks in advance,
> > Roger
> > --
> >
[-- Attachment #2: patchfd2.txt --]
[-- Type: text/plain, Size: 2321 bytes --]
diff --git a/gcc/match.pd b/gcc/match.pd
index f82f94a..7940f75 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -982,6 +982,26 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
&& tree_nop_conversion_p (type, TREE_TYPE (@1)))
(lshift @0 @2)))
+/* Shifts by constants distribute over several binary operations,
+ hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
+(for op (plus minus)
+ (simplify
+ (op (lshift:s @0 @1) (lshift:s @2 @1))
+ (if (INTEGRAL_TYPE_P (type)
+ && TYPE_OVERFLOW_WRAPS (type)
+ && !TYPE_SATURATING (type))
+ (lshift (op @0 @2) @1))))
+
+(for op (bit_and bit_ior bit_xor)
+ (simplify
+ (op (lshift:s @0 @1) (lshift:s @2 @1))
+ (if (INTEGRAL_TYPE_P (type))
+ (lshift (op @0 @2) @1)))
+ (simplify
+ (op (rshift:s @0 @1) (rshift:s @2 @1))
+ (if (INTEGRAL_TYPE_P (type))
+ (rshift (op @0 @2) @1))))
+
/* Fold (1 << (C - x)) where C = precision(type) - 1
into ((1 << C) >> x). */
(simplify
diff --git a/gcc/testsuite/gcc.dg/pr71343-1.c b/gcc/testsuite/gcc.dg/pr71343-1.c
new file mode 100644
index 0000000..146f5fc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-1.c
@@ -0,0 +1,56 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int foo_plus(unsigned int a, unsigned int b)
+{
+ return (a << 2) + (b << 2);
+}
+
+unsigned int foo_and(unsigned int a, unsigned int b)
+{
+ return (a << 2) & (b << 2);
+}
+
+unsigned int foo_ior(unsigned int a, unsigned int b)
+{
+ return (a << 2) | (b << 2);
+}
+
+unsigned int foo_xor(unsigned int a, unsigned int b)
+{
+ return (a << 2) ^ (b << 2);
+}
+
+unsigned int bar_and(unsigned int a, unsigned int b)
+{
+ return (a >> 2) & (b >> 2);
+}
+
+unsigned int bar_ior(unsigned int a, unsigned int b)
+{
+ return (a >> 2) | (b >> 2);
+}
+
+unsigned int bar_xor(unsigned int a, unsigned int b)
+{
+ return (a >> 2) ^ (b >> 2);
+}
+
+int baz_and(int a, int b)
+{
+ return (a >> 2) & (b >> 2);
+}
+
+int baz_ior(int a, int b)
+{
+ return (a >> 2) | (b >> 2);
+}
+
+int baz_xor(int a, int b)
+{
+ return (a >> 2) ^ (b >> 2);
+}
+
+/* { dg-final { scan-tree-dump-times " << " 4 "optimized" } } */
+/* { dg-final { scan-tree-dump-times " >> " 6 "optimized" } } */
+
* Re: [PATCH take #2] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.
2022-08-12 21:45 ` [PATCH take #2] " Roger Sayle
@ 2022-08-15 7:48 ` Richard Biener
0 siblings, 0 replies; 6+ messages in thread
From: Richard Biener @ 2022-08-15 7:48 UTC (permalink / raw)
To: Roger Sayle; +Cc: GCC Patches
On Fri, Aug 12, 2022 at 11:45 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> Hi Richard,
> Many thanks for the review and useful suggestions. I (think I) agree that
> handling non-canonical forms in value_numbering makes more sense,
> so this revised patch is just the first (non-controversial) part of the original
> submission, that incorporates your observation that it doesn't need to
> be limited to (valid) constant shifts, and can be generalized to any
> shift, without introducing undefined behaviour that didn't exist before.
>
> This revised patch has been tested on x86_64-pc-linux-gnu with
> make bootstrap and make -k check, both with and without
> --target_board=unix{-m32} with no new failures. Ok for mainline?
OK.
Thanks,
Richard.
>
> 2022-08-12 Roger Sayle <roger@nextmovesoftware.com>
> Richard Biener <rguenther@suse.de>
>
> gcc/ChangeLog
> PR tree-optimization/71343
> * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
>
> gcc/testsuite/ChangeLog
> PR tree-optimization/71343
> * gcc.dg/pr71343-1.c: New test case.
>
>
> Thanks,
> Roger
> --
>
> > -----Original Message-----
> > From: Richard Biener <richard.guenther@gmail.com>
> > Sent: 08 August 2022 12:42
> > To: Roger Sayle <roger@nextmovesoftware.com>
> > Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> > Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> > (X&Y)<<C.
> >
> > On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> > <roger@nextmovesoftware.com> wrote:
> > >
> > >
> > > This patch resolves PR tree-optimization/71343, a missed-optimization
> > > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > > This requires two related (sets of) optimizations to be added to match.pd.
> > >
> > > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > > for many binary operators, including AND, IOR, XOR, and (if overflow
> > > isn't an issue) PLUS and MINUS. Likewise, the right shifts (both
> > > logical and arithmetic) and bit-wise logical operators can be
> > > simplified in a similar fashion. These all reduce the number of
> > > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> > > operation.
> > >
> > > The second optimization reflects that the middle-end doesn't impose a
> > > canonical form on multiplications by powers of two, vs. left shifts,
> > > instead leaving these operations as specified by the programmer unless
> > > there's a good reason to change them. Hence, GIMPLE code may contain
> > > the expressions "X * 8" and "X << 3" even though these represent the
> > > same value/computation. The tweak to match.pd is that comparison
> > > operations whose operands are equivalent non-canonical expressions can
> > > be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
> > > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> > >
> > > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > > and make -k check, both with and without --target_board=unix{-m32},
> > > with no new failures. Ok for mainline?
> >
> > +/* Shifts by constants distribute over several binary operations,
> > + hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
> > +(for op (plus minus)
> > + (simplify
> > + (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> > + (if (INTEGRAL_TYPE_P (type)
> > + && TYPE_OVERFLOW_WRAPS (type)
> > + && !TYPE_SATURATING (type)
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (type))
> >
> > I do wonder why we need to restrict this to shifts by constants?
> > Any out-of-bound shift was already there, no?
> >
> > +/* Some tree expressions are intentionally non-canonical.
> > + We handle the comparison of the equivalent forms here. */
> > +(for cmp (eq le ge)
> > + (simplify
> > + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> > + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > + { constant_boolean_node (true, type); })))
> > +
> > +(for cmp (ne lt gt)
> > + (simplify
> > + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> > + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > + { constant_boolean_node (false, type); })))
> >
> > hmm. I wonder if it makes more sense to handle this in value-numbering.
> > tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> > canonicalization issues but the shift vs mult could be handled there by just
> > performing the alternate lookup. That would also enable CSE and by means of
> > that of course the comparisons you do above.
> >
> > Thanks,
> > Richard.
> >
> > >
> > > 2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
> > >
> > > gcc/ChangeLog
> > > PR tree-optimization/71343
> > > * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> > > expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> > > (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> > > to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> > > (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> > > shifts by integer constants and multiplications by powers of 2.
> > >
> > > gcc/testsuite/ChangeLog
> > > PR tree-optimization/71343
> > > * gcc.dg/pr71343-1.c: New test case.
> > > * gcc.dg/pr71343-2.c: Likewise.
> > >
> > >
> > > Thanks in advance,
> > > Roger
> > > --
> > >
* [PATCH] PR tree-optimization/71343: Value number X<<2 as X*4.
2022-08-08 11:42 ` Richard Biener
2022-08-12 21:45 ` [PATCH take #2] " Roger Sayle
@ 2022-09-13 17:54 ` Roger Sayle
2022-09-14 8:24 ` Richard Biener
1 sibling, 1 reply; 6+ messages in thread
From: Roger Sayle @ 2022-09-13 17:54 UTC (permalink / raw)
To: 'GCC Patches'; +Cc: 'Richard Biener'
[-- Attachment #1: Type: text/plain, Size: 5914 bytes --]
This patch is the second part of a fix for PR tree-optimization/71343,
that implements Richard Biener's suggestion of using tree-ssa's value
numbering instead of match.pd. The change is that when assigning a
value number for the expression X<<C, we actually look up or insert
the value number for the multiplication X*(1<<C). This elegantly
handles the fact that we (intentionally) don't canonicalize these as
equivalent in GIMPLE, and the optimization/equivalence in PR 71343 now
happens by (tree-ssa SCCVN) magic.
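As an illustrative sketch of what this enables (a hypothetical example, not
the new testcase itself), the two equivalent computations below should now
receive the same value number, so the comparison is expected to fold to 1
at -O2 once CSE has run:

unsigned int
example_vn (unsigned int x)
{
  unsigned int t1 = x << 3;   /* value-numbered as x * 8  */
  unsigned int t2 = x * 8;    /* finds the same value number as t1  */
  return t1 == t2;
}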
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32},
with no new failures. Ok for mainline?
2022-09-13 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/71343
* tree-ssa-sccvn.cc (visit_nary_op) <case LSHIFT_EXPR>: Make
the value number of the expression X << C the same as the value
number for the multiplication X * (1<<C).
gcc/testsuite/ChangeLog
PR tree-optimization/71343
* gcc.dg/pr71343-2.c: New test case.
Thanks in advance,
Roger
--
> -----Original Message-----
> From: Richard Biener <richard.guenther@gmail.com>
> Sent: 08 August 2022 12:42
> To: Roger Sayle <roger@nextmovesoftware.com>
> Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> (X&Y)<<C.
>
> On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> <roger@nextmovesoftware.com> wrote:
> >
> > This patch resolves PR tree-optimization/71343, a missed-optimization
> > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > This requires two related (sets of) optimizations to be added to match.pd.
> >
> > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > for many binary operators, including AND, IOR, XOR, and (if overflow
> > isn't an issue) PLUS and MINUS. Likewise, the right shifts (both
> > logical and arithmetic) and bit-wise logical operators can be
> > simplified in a similar fashion. These all reduce the number of
> > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> > operation.
> >
> > The second optimization reflects that the middle-end doesn't impose a
> > canonical form on multiplications by powers of two, vs. left shifts,
> > instead leaving these operations as specified by the programmer unless
> > there's a good reason to change them. Hence, GIMPLE code may contain
> > the expressions "X * 8" and "X << 3" even though these represent the
> > same value/computation. The tweak to match.pd is that comparison
> > operations whose operands are equivalent non-canonical expressions can
> > be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
> > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> >
> > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > and make -k check, both with and without --target_board=unix{-m32},
> > with no new failures. Ok for mainline?
>
> +/* Shifts by constants distribute over several binary operations,
> + hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
> +(for op (plus minus)
> + (simplify
> + (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> + (if (INTEGRAL_TYPE_P (type)
> + && TYPE_OVERFLOW_WRAPS (type)
> + && !TYPE_SATURATING (type)
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (type))
>
> I do wonder why we need to restrict this to shifts by constants?
> Any out-of-bound shift was already there, no?
>
> +/* Some tree expressions are intentionally non-canonical.
> + We handle the comparison of the equivalent forms here. */
> +(for cmp (eq le ge)
> + (simplify
> + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> + { constant_boolean_node (true, type); })))
> +
> +(for cmp (ne lt gt)
> + (simplify
> + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> + && tree_fits_shwi_p (@1)
> + && tree_to_shwi (@1) > 0
> + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> + { constant_boolean_node (false, type); })))
>
> hmm. I wonder if it makes more sense to handle this in value-numbering.
> tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> canonicalization issues but the shift vs mult could be handled there by just
> performing the alternate lookup. That would also enable CSE and by means of
> that of course the comparisons you do above.
>
> Thanks,
> Richard.
>
> >
> > 2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> > PR tree-optimization/71343
> > * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> > expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> > (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> > to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> > (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> > shifts by integer constants and multiplications by powers of 2.
> >
> > gcc/testsuite/ChangeLog
> > PR tree-optimization/71343
> > * gcc.dg/pr71343-1.c: New test case.
> > * gcc.dg/pr71343-2.c: Likewise.
> >
> >
> > Thanks in advance,
> > Roger
> > --
[-- Attachment #2: patchvn.txt --]
[-- Type: text/plain, Size: 2066 bytes --]
diff --git a/gcc/tree-ssa-sccvn.cc b/gcc/tree-ssa-sccvn.cc
index 74b8d8d..2644446 100644
--- a/gcc/tree-ssa-sccvn.cc
+++ b/gcc/tree-ssa-sccvn.cc
@@ -5312,6 +5312,30 @@ visit_nary_op (tree lhs, gassign *stmt)
}
}
break;
+ case LSHIFT_EXPR:
+ /* For X << C, use the value number of X * (1 << C). */
+ if (INTEGRAL_TYPE_P (type))
+ {
+ tree rhs2 = gimple_assign_rhs2 (stmt);
+ if (TREE_CODE (rhs2) == INTEGER_CST
+ && tree_fits_uhwi_p (rhs2)
+ && tree_to_uhwi (rhs2) < TYPE_PRECISION (type))
+ {
+ wide_int w = wi::set_bit_in_zero (tree_to_uhwi (rhs2),
+ TYPE_PRECISION (type));
+ gimple_match_op match_op (gimple_match_cond::UNCOND,
+ MULT_EXPR, type, rhs1,
+ wide_int_to_tree (type, w));
+ result = vn_nary_build_or_lookup (&match_op);
+ if (result)
+ {
+ bool changed = set_ssa_val_to (lhs, result);
+ vn_nary_op_insert_stmt (stmt, result);
+ return changed;
+ }
+ }
+ }
+ break;
default:
break;
}
diff --git a/gcc/testsuite/gcc.dg/pr71343-2.c b/gcc/testsuite/gcc.dg/pr71343-2.c
new file mode 100644
index 0000000..11800a9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-2.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int test1(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == a * 4 + b * 4;
+}
+
+unsigned int test2(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == (a + b) << 2;
+}
+
+unsigned int test3(unsigned int a , unsigned int b)
+{
+ return a * 4 + b * 4 == (a + b) * 4;
+}
+
+unsigned int test4(unsigned int a , unsigned int b)
+{
+ return (a + b) << 2 == (a + b) * 4;
+}
+
+unsigned int test5(unsigned int a , unsigned int b)
+{
+ return (a << 2) + (b << 2) == (a + b) * 4;
+}
+
+unsigned int test6(unsigned int a , unsigned int b)
+{
+ return (a + b) << 2 == a * 4 + b * 4;
+}
+
+/* { dg-final { scan-tree-dump-times "return 1" 6 "optimized" } } */
* Re: [PATCH] PR tree-optimization/71343: Value number X<<2 as X*4.
2022-09-13 17:54 ` [PATCH] PR tree-optimization/71343: Value number X<<2 as X*4 Roger Sayle
@ 2022-09-14 8:24 ` Richard Biener
0 siblings, 0 replies; 6+ messages in thread
From: Richard Biener @ 2022-09-14 8:24 UTC (permalink / raw)
To: Roger Sayle; +Cc: GCC Patches
On Tue, Sep 13, 2022 at 7:55 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> This patch is the second part of a fix for PR tree-optimization/71343,
> that implements Richard Biener's suggestion of using tree-ssa's value
> numbering instead of match.pd. The change is that when assigning a
> value number for the expression X<<C, we actually look-up or insert
> the value number for the multiplication X*(1<<C). This elegantly
> handles the fact that we (intentionally) don't canonicalize these as
> equivalent in GIMPLE, and the optimization/equivalence in PR 71343 now
> happens by (tree-ssa SCCVN) magic.
>
> This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> and make -k check, both with and without --target_board=unix{-m32},
> with no new failures. Ok for mainline?
Note that "insertion" is quite limited, in particular it does not support
inserting a MULT_EXPR (see eliminate_dom_walker::eliminate_insert).
Your testcases have all expressions in one C statement; if you break
things down to enforce the various evaluation orders you seem to
test, I think you'd see that CSEing
tem1 = (a + b) << 2;
tem2 = (a + b ) * 4;
does not actually work? Amending eliminate_insert by, for example,
changing the BIT_AND_EXPR handling (with constant rhs2) to
cover all tcc_binary should make it work.
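For concreteness, something like the following hypothetical variant of
test4 from pr71343-2.c (illustrative only, not part of the patch) would
exercise that path:

unsigned int test4_split (unsigned int a, unsigned int b)
{
  unsigned int tem1 = (a + b) << 2;
  unsigned int tem2 = (a + b) * 4;
  /* Folding this requires CSE/elimination of the two equivalent
     computations, not just a single-expression match.pd fold.  */
  return tem1 == tem2;
}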
Can you double-check?
Otherwise this looks OK.
Thanks,
Richard.
>
> 2022-09-13 Roger Sayle <roger@nextmovesoftware.com>
>
> gcc/ChangeLog
> PR tree-optimization/71343
> * tree-ssa-sccvn.cc (visit_nary_op) <case LSHIFT_EXPR>: Make
> the value number of the expression X << C the same as the value
> number for the multiplication X * (1<<C).
>
> gcc/testsuite/ChangeLog
> PR tree-optimization/71343
> * gcc.dg/pr71343-2.c: New test case.
>
>
> Thanks in advance,
> Roger
> --
>
> > -----Original Message-----
> > From: Richard Biener <richard.guenther@gmail.com>
> > Sent: 08 August 2022 12:42
> > To: Roger Sayle <roger@nextmovesoftware.com>
> > Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> > Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> > (X&Y)<<C.
> >
> > On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> > <roger@nextmovesoftware.com> wrote:
> > >
> > > This patch resolves PR tree-optimization/71343, a missed-optimization
> > > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > > This requires two related (sets of) optimizations to be added to match.pd.
> > >
> > > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > > for many binary operators, including AND, IOR, XOR, and (if overflow
> > > isn't an issue) PLUS and MINUS. Likewise, the right shifts (both
> > > logical and arithmetic) and bit-wise logical operators can be
> > > simplified in a similar fashion. These all reduce the number of
> > > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> > > operation.
> > >
> > > The second optimization reflects that the middle-end doesn't impose a
> > > canonical form on multiplications by powers of two, vs. left shifts,
> > > instead leaving these operations as specified by the programmer unless
> > > there's a good reason to change them. Hence, GIMPLE code may contain
> > > the expressions "X * 8" and "X << 3" even though these represent the
> > > same value/computation. The tweak to match.pd is that comparison
> > > operations whose operands are equivalent non-canonical expressions can
> > > be taught their equivalence. Hence "(X * 8) == (X << 3)" will always
> > > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> > >
> > > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > > and make -k check, both with and without --target_board=unix{-m32},
> > > with no new failures. Ok for mainline?
> >
> > +/* Shifts by constants distribute over several binary operations,
> > + hence (X << C) + (Y << C) can be simplified to (X + Y) << C. */
> > +(for op (plus minus)
> > + (simplify
> > + (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> > + (if (INTEGRAL_TYPE_P (type)
> > + && TYPE_OVERFLOW_WRAPS (type)
> > + && !TYPE_SATURATING (type)
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (type))
> >
> > I do wonder why we need to restrict this to shifts by constants?
> > Any out-of-bound shift was already there, no?
> >
> > +/* Some tree expressions are intentionally non-canonical.
> > + We handle the comparison of the equivalent forms here. */
> > +(for cmp (eq le ge)
> > + (simplify
> > + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> > + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > + { constant_boolean_node (true, type); })))
> > +
> > +(for cmp (ne lt gt)
> > + (simplify
> > + (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > + (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > + && tree_fits_shwi_p (@1)
> > + && tree_to_shwi (@1) > 0
> > + && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> > + && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > + { constant_boolean_node (false, type); })))
> >
> > hmm. I wonder if it makes more sense to handle this in value-numbering.
> > tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> > canonicalization issues but the shift vs mult could be handled there by just
> > performing the alternate lookup. That would also enable CSE and by means of
> > that of course the comparisons you do above.
> >
> > Thanks,
> > Richard.
> >
> > >
> > > 2022-08-08 Roger Sayle <roger@nextmovesoftware.com>
> > >
> > > gcc/ChangeLog
> > > PR tree-optimization/71343
> > > * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> > > expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> > > (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> > > to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> > > (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> > > shifts by integer constants and multiplications by powers of 2.
> > >
> > > gcc/testsuite/ChangeLog
> > > PR tree-optimization/71343
> > > * gcc.dg/pr71343-1.c: New test case.
> > > * gcc.dg/pr71343-2.c: Likewise.
> > >
> > >
> > > Thanks in advance,
> > > Roger
> > > --
>