* [PATCH] Further improve VRP BIT_AND_EXPR and BIT_IOR_EXPR handling (PR tree-optimization/28632)
@ 2010-07-09 19:10 Jakub Jelinek
2010-07-09 19:34 ` Richard Guenther
2010-07-12 17:30 ` Jeff Law
From: Jakub Jelinek @ 2010-07-09 19:10 UTC (permalink / raw)
To: gcc-patches; +Cc: aldot, Denys Vlasenko
Hi!
When writing the last VRP BIT_AND_EXPR patch, I wasn't aware of this PR.
Looking at it, there are indeed further optimization opportunities, both
for BIT_AND_EXPR and for BIT_IOR_EXPR.  For the latter, e.g., we used to
derive the range [0x800, 0xc0f] from [0x400, 0x40f] | [0x800, 0x80f],
while this patch computes [0xc00, 0xc0f].
The new helper function might also be useful when optimizing away useless
BIT_AND_EXPRs or BIT_IOR_EXPRs.
Bootstrapped/regtested on x86_64-linux and i686-linux. Ok for trunk?
2010-07-09 Jakub Jelinek <jakub@redhat.com>
Denys Vlasenko <dvlasenk@redhat.com>
Bernhard Reutner-Fischer <aldot@gcc.gnu.org>
PR tree-optimization/28632
* tree-vrp.c (zero_nonzero_bits_from_vr): New function.
(extract_range_from_binary_expr): Further optimize
BIT_AND_EXPR and BIT_IOR_EXPR.
* gcc.dg/tree-ssa/vrp51.c: New test.
* gcc.dg/tree-ssa/vrp52.c: New test.
--- gcc/tree-vrp.c.jj 2010-07-09 13:44:23.000000000 +0200
+++ gcc/tree-vrp.c 2010-07-09 19:04:54.000000000 +0200
@@ -2064,6 +2064,60 @@ vrp_int_const_binop (enum tree_code code
}
+/* For range VR compute two double_int bitmasks.  In *MAY_BE_NONZERO,
+ if some bit is unset, that bit is 0 for all numbers in the range;
+ otherwise it might be 0 or 1.  In *MUST_BE_NONZERO, if some bit is
+ set, that bit is 1 for all numbers in the range; otherwise it might
+ be 0 or 1.  */
+
+static bool
+zero_nonzero_bits_from_vr (value_range_t *vr, double_int *may_be_nonzero,
+ double_int *must_be_nonzero)
+{
+ if (range_int_cst_p (vr))
+ {
+ if (range_int_cst_singleton_p (vr))
+ {
+ *may_be_nonzero = tree_to_double_int (vr->min);
+ *must_be_nonzero = *may_be_nonzero;
+ return true;
+ }
+ if (tree_int_cst_sgn (vr->min) >= 0)
+ {
+ double_int dmin = tree_to_double_int (vr->min);
+ double_int dmax = tree_to_double_int (vr->max);
+ double_int xor_mask = double_int_xor (dmin, dmax);
+ *may_be_nonzero = double_int_ior (dmin, dmax);
+ *must_be_nonzero = double_int_and (dmin, dmax);
+ if (xor_mask.high != 0)
+ {
+ unsigned HOST_WIDE_INT mask
+ = ((unsigned HOST_WIDE_INT) 1
+ << floor_log2 (xor_mask.high)) - 1;
+ may_be_nonzero->low = ALL_ONES;
+ may_be_nonzero->high |= mask;
+ must_be_nonzero->low = 0;
+ must_be_nonzero->high &= ~mask;
+ }
+ else if (xor_mask.low != 0)
+ {
+ unsigned HOST_WIDE_INT mask
+ = ((unsigned HOST_WIDE_INT) 1
+ << floor_log2 (xor_mask.low)) - 1;
+ may_be_nonzero->low |= mask;
+ must_be_nonzero->low &= ~mask;
+ }
+ return true;
+ }
+ }
+ may_be_nonzero->low = ALL_ONES;
+ may_be_nonzero->high = ALL_ONES;
+ must_be_nonzero->low = 0;
+ must_be_nonzero->high = 0;
+ return false;
+}
+
+
/* Extract range information from a binary expression EXPR based on
the ranges of each of its operands and the expression code. */
@@ -2569,119 +2623,78 @@ extract_range_from_binary_expr (value_ra
min = vrp_int_const_binop (code, vr0.min, vr1.max);
max = vrp_int_const_binop (code, vr0.max, vr1.min);
}
- else if (code == BIT_AND_EXPR)
+ else if (code == BIT_AND_EXPR || code == BIT_IOR_EXPR)
{
bool vr0_int_cst_singleton_p, vr1_int_cst_singleton_p;
+ bool int_cst_range0, int_cst_range1;
+ double_int may_be_nonzero0, may_be_nonzero1;
+ double_int must_be_nonzero0, must_be_nonzero1;
vr0_int_cst_singleton_p = range_int_cst_singleton_p (&vr0);
vr1_int_cst_singleton_p = range_int_cst_singleton_p (&vr1);
+ int_cst_range0 = zero_nonzero_bits_from_vr (&vr0, &may_be_nonzero0,
+ &must_be_nonzero0);
+ int_cst_range1 = zero_nonzero_bits_from_vr (&vr1, &may_be_nonzero1,
+ &must_be_nonzero1);
+ type = VR_RANGE;
if (vr0_int_cst_singleton_p && vr1_int_cst_singleton_p)
min = max = int_const_binop (code, vr0.max, vr1.max, 0);
- else if (range_int_cst_p (&vr0)
- && range_int_cst_p (&vr1)
- && tree_int_cst_sgn (vr0.min) >= 0
- && tree_int_cst_sgn (vr1.min) >= 0)
- {
- double_int vr0_mask = tree_to_double_int (vr0.min);
- double_int vr1_mask = tree_to_double_int (vr1.min);
- double_int maxd, diff;
- tree mask;
-
- min = build_int_cst (expr_type, 0);
- /* Compute non-zero bits mask from both ranges. */
- if (!vr0_int_cst_singleton_p)
- {
- maxd = tree_to_double_int (vr0.max);
- diff = double_int_sub (maxd, vr0_mask);
- if (diff.high)
- {
- diff.low = ~(unsigned HOST_WIDE_INT)0;
- diff.high = ((HOST_WIDE_INT) 2
- << floor_log2 (diff.high)) - 1;
- }
- else
- diff.low = ((HOST_WIDE_INT) 2 << floor_log2 (diff.low)) - 1;
- vr0_mask = double_int_ior (vr0_mask,
- double_int_ior (maxd, diff));
- }
- if (!vr1_int_cst_singleton_p)
- {
- maxd = tree_to_double_int (vr1.max);
- diff = double_int_sub (maxd, vr1_mask);
- if (diff.high)
- {
- diff.low = ~(unsigned HOST_WIDE_INT)0;
- diff.high = ((HOST_WIDE_INT) 2
- << floor_log2 (diff.high)) - 1;
- }
- else
- diff.low = ((HOST_WIDE_INT) 2 << floor_log2 (diff.low)) - 1;
- vr1_mask = double_int_ior (vr1_mask,
- double_int_ior (maxd, diff));
- }
- mask = double_int_to_tree (expr_type,
- double_int_and (vr0_mask, vr1_mask));
- max = vr0.max;
- if (tree_int_cst_lt (vr1.max, max))
- max = vr1.max;
- if (!TREE_OVERFLOW (mask)
- && tree_int_cst_lt (mask, max)
- && tree_int_cst_sgn (mask) >= 0)
- max = mask;
- }
- else if (vr0_int_cst_singleton_p
- && tree_int_cst_sgn (vr0.max) >= 0)
+ else if (!int_cst_range0 && !int_cst_range1)
{
- min = build_int_cst (expr_type, 0);
- max = vr0.max;
+ set_value_range_to_varying (vr);
+ return;
}
- else if (vr1_int_cst_singleton_p
- && tree_int_cst_sgn (vr1.max) >= 0)
+ else if (code == BIT_AND_EXPR)
{
- type = VR_RANGE;
- min = build_int_cst (expr_type, 0);
- max = vr1.max;
+ min = double_int_to_tree (expr_type,
+ double_int_and (must_be_nonzero0,
+ must_be_nonzero1));
+ max = double_int_to_tree (expr_type,
+ double_int_and (may_be_nonzero0,
+ may_be_nonzero1));
+ if (TREE_OVERFLOW (min) || tree_int_cst_sgn (min) < 0)
+ min = NULL_TREE;
+ if (TREE_OVERFLOW (max) || tree_int_cst_sgn (max) < 0)
+ max = NULL_TREE;
+ if (int_cst_range0 && tree_int_cst_sgn (vr0.min) >= 0)
+ {
+ if (min == NULL_TREE)
+ min = build_int_cst (expr_type, 0);
+ if (max == NULL_TREE || tree_int_cst_lt (vr0.max, max))
+ max = vr0.max;
+ }
+ if (int_cst_range1 && tree_int_cst_sgn (vr1.min) >= 0)
+ {
+ if (min == NULL_TREE)
+ min = build_int_cst (expr_type, 0);
+ if (max == NULL_TREE || tree_int_cst_lt (vr1.max, max))
+ max = vr1.max;
+ }
}
- else
+ else if (!int_cst_range0
+ || !int_cst_range1
+ || tree_int_cst_sgn (vr0.min) < 0
+ || tree_int_cst_sgn (vr1.min) < 0)
{
set_value_range_to_varying (vr);
return;
}
- }
- else if (code == BIT_IOR_EXPR)
- {
- if (range_int_cst_p (&vr0)
- && range_int_cst_p (&vr1)
- && tree_int_cst_sgn (vr0.min) >= 0
- && tree_int_cst_sgn (vr1.min) >= 0)
- {
- double_int vr0_max = tree_to_double_int (vr0.max);
- double_int vr1_max = tree_to_double_int (vr1.max);
- double_int ior_max;
-
- /* Set all bits to the right of the most significant one to 1.
- For example, [0, 4] | [4, 4] = [4, 7]. */
- ior_max.low = vr0_max.low | vr1_max.low;
- ior_max.high = vr0_max.high | vr1_max.high;
- if (ior_max.high != 0)
- {
- ior_max.low = ~(unsigned HOST_WIDE_INT)0u;
- ior_max.high |= ((HOST_WIDE_INT) 1
- << floor_log2 (ior_max.high)) - 1;
- }
- else if (ior_max.low != 0)
- ior_max.low |= ((unsigned HOST_WIDE_INT) 1u
- << floor_log2 (ior_max.low)) - 1;
-
- /* Both of these endpoints are conservative. */
- min = vrp_int_const_binop (MAX_EXPR, vr0.min, vr1.min);
- max = double_int_to_tree (expr_type, ior_max);
- }
else
{
- set_value_range_to_varying (vr);
- return;
+ min = double_int_to_tree (expr_type,
+ double_int_ior (must_be_nonzero0,
+ must_be_nonzero1));
+ max = double_int_to_tree (expr_type,
+ double_int_ior (may_be_nonzero0,
+ may_be_nonzero1));
+ if (TREE_OVERFLOW (min) || tree_int_cst_sgn (min) < 0)
+ min = vr0.min;
+ else
+ min = vrp_int_const_binop (MAX_EXPR, min, vr0.min);
+ if (TREE_OVERFLOW (max) || tree_int_cst_sgn (max) < 0)
+ max = NULL_TREE;
+ min = vrp_int_const_binop (MAX_EXPR, min, vr1.min);
}
}
else
--- gcc/testsuite/gcc.dg/tree-ssa/vrp51.c.jj 2010-07-09 18:38:41.000000000 +0200
+++ gcc/testsuite/gcc.dg/tree-ssa/vrp51.c 2010-07-09 18:37:36.000000000 +0200
@@ -0,0 +1,58 @@
+/* PR tree-optimization/28632 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vrp" } */
+
+void
+v4 (unsigned a, unsigned b)
+{
+ if (a < 0x1000) return;
+ if (a > 0x1000) return;
+ if (b < 0x0110) return;
+ /* constant true. */
+ if (!__builtin_constant_p ((a|b) >= 0x01000))
+ __asm__("bug.always.true");
+ /* VRP must not think that this is constant. */
+ if (__builtin_constant_p ((a|b) >= 0x10000))
+ __asm__("bug.not.always.true");
+}
+
+void
+u4 (unsigned n)
+{
+ if (n > 0x10111) return;
+ if (n < 0x10101) return;
+ /* always true. */
+ if (!__builtin_constant_p (n & 0x00100))
+ __asm__("bug.always.true");
+ /* VRP must not think that this is constant true. */
+ if (__builtin_constant_p (n & 0x00001))
+ __asm__("bug.not.always.true");
+ /* Out of range, always evaluates to constant false. */
+ if (!__builtin_constant_p (n & 0x01000))
+ __asm__("bug.always.false");
+}
+
+void
+u5 (unsigned n)
+{
+ struct s {unsigned exp:8;} x;
+ x.exp = n;
+ if (__builtin_constant_p(((n + 1) & 255) > 1))
+ __asm__("bug.not.always.true");
+}
+
+void
+v5 (int a, int b)
+{
+ if (a < 0x1000) return;
+ if (a > 0x1000) return;
+ if (b < 0x0110) return;
+ /* constant true. */
+ if (!__builtin_constant_p ((a|b) >= 0x01000))
+ __asm__("bug.always.true");
+ /* VRP must not think that this is always true. */
+ if (__builtin_constant_p ((a|b) >= 0x10000))
+ __asm__("bug.not.always.true");
+}
+
+/* { dg-final { scan-assembler-not "bug\." } } */
--- gcc/testsuite/gcc.dg/tree-ssa/vrp52.c.jj 2010-07-09 18:39:07.000000000 +0200
+++ gcc/testsuite/gcc.dg/tree-ssa/vrp52.c 2010-07-09 18:43:47.000000000 +0200
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp1" } */
+
+int
+foo (unsigned int i, unsigned int j)
+{
+ i &= 15;
+ j &= 15;
+ i += 1024;
+ j += 2048;
+ i |= j;
+ return i >= 1024 + 2048;
+}
+
+/* { dg-final { scan-tree-dump "Folding predicate i_\[^\n\r\]* to 1" "vrp1" } } */
+/* { dg-final { cleanup-tree-dump "vrp1" } } */
Jakub
* Re: [PATCH] Further improve VRP BIT_AND_EXPR and BIT_IOR_EXPR handling (PR tree-optimization/28632)
From: Richard Guenther @ 2010-07-09 19:34 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches, aldot, Denys Vlasenko
On Fri, Jul 9, 2010 at 9:11 PM, Jakub Jelinek <jakub@redhat.com> wrote:
> Hi!
>
> When writing the last VRP BIT_AND_EXPR patch, I wasn't aware of this PR.
> Looking at it there indeed are further possibilities for optimizations,
> both for BIT_AND_EXPR and also BIT_IOR_EXPR. For the latter, e.g. we used
> to derive from [0x400, 0x40f] | [0x800, 0x80f] a [0x800, 0xc0f]
> range, while this patch computes [0xc00, 0xc0f] range.
> The new helper function might be useful even when optimizing away useless
> BIT_AND_EXPR or BIT_IOR_EXPRs.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux. Ok for trunk?
Ok.
Thanks,
Richard.
* Re: [PATCH] Further improve VRP BIT_AND_EXPR and BIT_IOR_EXPR handling (PR tree-optimization/28632)
From: Jeff Law @ 2010-07-12 17:30 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches, aldot, Denys Vlasenko
On 07/09/10 13:11, Jakub Jelinek wrote:
> Hi!
>
> When writing the last VRP BIT_AND_EXPR patch, I wasn't aware of this PR.
> Looking at it there indeed are further possibilities for optimizations,
> both for BIT_AND_EXPR and also BIT_IOR_EXPR. For the latter, e.g. we used
> to derive from [0x400, 0x40f] | [0x800, 0x80f] a [0x800, 0xc0f]
> range, while this patch computes [0xc00, 0xc0f] range.
> The new helper function might be useful even when optimizing away useless
> BIT_AND_EXPR or BIT_IOR_EXPRs.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux. Ok for trunk?
>
If you wanted to get ambitious, you could go through all the
zero/nonzero bit stuff in combine.c and translate it to VRP style. It'd
be interesting to know how many of those transformations trigger on the
higher level IR.
Jeff