public inbox for gcc-patches@gcc.gnu.org
* [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
@ 2011-07-13  7:33 Kai Tietz
  2011-07-13  9:02 ` Richard Guenther
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Tietz @ 2011-07-13  7:33 UTC (permalink / raw)
  To: GCC Patches; +Cc: Richard Guenther

Hello,

I split my old patch into 8 separate pieces for easier review.  These patches
are a prerequisite for enabling the boolification of comparisons in the
gimplifier and the necessary type-cast preservation in gimple from/to boolean type.

This patch adds support to fold_truth_not_expr for bitwise binary and
bitwise-not expressions on one-bit precision types.
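
For reference, the new cases just encode the usual boolean identities on
one-bit values.  A minimal stand-alone C sketch of those identities
(illustration only, not part of the patch; _Bool stands in for a 1-bit type):

/* Check the 1-bit identities the new BIT_* cases rely on over all four
   input combinations.  */
#include <assert.h>
#include <stdbool.h>

int
main (void)
{
  int i;
  for (i = 0; i < 4; i++)
    {
      bool a = i & 1, b = (i >> 1) & 1;
      assert (!(a & b) == (!a | !b));  /* BIT_AND_EXPR: De Morgan.  */
      assert (!(a | b) == (!a & !b));  /* BIT_IOR_EXPR: De Morgan.  */
      assert (!(a ^ b) == (!a ^ b));   /* BIT_XOR_EXPR: invert one operand.  */
      assert ((a ^ 1) == !a);          /* BIT_NOT_EXPR: 1-bit ~a acts as !a.  */
    }
  return 0;
}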

ChangeLog

2011-07-13  Kai Tietz  <ktietz@redhat.com>

	* fold-const.c (fold_truth_not_expr): Add
	support for one-bit bitwise operations.

Bootstrapped and regression-tested on x86_64-pc-linux-gnu.
OK to apply?

Regards,
Kai

Index: gcc/gcc/fold-const.c
===================================================================
--- gcc.orig/gcc/fold-const.c	2011-07-13 07:48:29.000000000 +0200
+++ gcc/gcc/fold-const.c	2011-07-13 08:59:36.865620200 +0200
@@ -3074,20 +3074,36 @@ fold_truth_not_expr (location_t loc, tre
     case INTEGER_CST:
       return constant_boolean_node (integer_zerop (arg), type);

+    case BIT_AND_EXPR:
+      if (TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg, 0))) != 1)
+        return NULL_TREE;
+      if (integer_onep (TREE_OPERAND (arg, 1)))
+	return build2_loc (loc, EQ_EXPR, type, arg, build_int_cst (type, 0));
+      /* fall through */
     case TRUTH_AND_EXPR:
       loc1 = expr_location_or (TREE_OPERAND (arg, 0), loc);
       loc2 = expr_location_or (TREE_OPERAND (arg, 1), loc);
-      return build2_loc (loc, TRUTH_OR_EXPR, type,
+      return build2_loc (loc, (code == BIT_AND_EXPR ? BIT_IOR_EXPR
+						    : TRUTH_OR_EXPR), type,
 			 invert_truthvalue_loc (loc1, TREE_OPERAND (arg, 0)),
 			 invert_truthvalue_loc (loc2, TREE_OPERAND (arg, 1)));

+    case BIT_IOR_EXPR:
+      if (TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg, 0))) != 1)
+        return NULL_TREE;
+      /* fall through.  */
     case TRUTH_OR_EXPR:
       loc1 = expr_location_or (TREE_OPERAND (arg, 0), loc);
       loc2 = expr_location_or (TREE_OPERAND (arg, 1), loc);
-      return build2_loc (loc, TRUTH_AND_EXPR, type,
+      return build2_loc (loc, (code == BIT_IOR_EXPR ? BIT_AND_EXPR
+						    : TRUTH_AND_EXPR), type,
 			 invert_truthvalue_loc (loc1, TREE_OPERAND (arg, 0)),
 			 invert_truthvalue_loc (loc2, TREE_OPERAND (arg, 1)));

+    case BIT_XOR_EXPR:
+      if (TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg, 0))) != 1)
+        return NULL_TREE;
+      /* fall through.  */
     case TRUTH_XOR_EXPR:
       /* Here we can invert either operand.  We invert the first operand
 	 unless the second operand is a TRUTH_NOT_EXPR in which case our
@@ -3095,10 +3111,14 @@ fold_truth_not_expr (location_t loc, tre
 	 negation of the second operand.  */

       if (TREE_CODE (TREE_OPERAND (arg, 1)) == TRUTH_NOT_EXPR)
-	return build2_loc (loc, TRUTH_XOR_EXPR, type, TREE_OPERAND (arg, 0),
+	return build2_loc (loc, code, type, TREE_OPERAND (arg, 0),
+			   TREE_OPERAND (TREE_OPERAND (arg, 1), 0));
+      else if (TREE_CODE (TREE_OPERAND (arg, 1)) == BIT_NOT_EXPR
+	       && TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg, 1))) == 1)
+	return build2_loc (loc, code, type, TREE_OPERAND (arg, 0),
 			   TREE_OPERAND (TREE_OPERAND (arg, 1), 0));
       else
-	return build2_loc (loc, TRUTH_XOR_EXPR, type,
+	return build2_loc (loc, code, type,
 			   invert_truthvalue_loc (loc, TREE_OPERAND (arg, 0)),
 			   TREE_OPERAND (arg, 1));

@@ -3116,6 +3136,10 @@ fold_truth_not_expr (location_t loc, tre
 			 invert_truthvalue_loc (loc1, TREE_OPERAND (arg, 0)),
 			 invert_truthvalue_loc (loc2, TREE_OPERAND (arg, 1)));

+    case BIT_NOT_EXPR:
+      if (TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg, 0))) != 1)
+        return NULL_TREE;
+      /* fall through */
     case TRUTH_NOT_EXPR:
       return TREE_OPERAND (arg, 0);

@@ -3158,11 +3182,6 @@ fold_truth_not_expr (location_t loc, tre
       return build1_loc (loc, TREE_CODE (arg), type,
 			 invert_truthvalue_loc (loc1, TREE_OPERAND (arg, 0)));

-    case BIT_AND_EXPR:
-      if (!integer_onep (TREE_OPERAND (arg, 1)))
-	return NULL_TREE;
-      return build2_loc (loc, EQ_EXPR, type, arg, build_int_cst (type, 0));
-
     case SAVE_EXPR:
       return build1_loc (loc, TRUTH_NOT_EXPR, type, arg);


* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13  7:33 [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr Kai Tietz
@ 2011-07-13  9:02 ` Richard Guenther
  2011-07-13  9:08   ` Kai Tietz
  0 siblings, 1 reply; 8+ messages in thread
From: Richard Guenther @ 2011-07-13  9:02 UTC (permalink / raw)
  To: Kai Tietz; +Cc: GCC Patches

On Wed, Jul 13, 2011 at 9:32 AM, Kai Tietz <ktietz70@googlemail.com> wrote:
> Hello,
>
> I split my old patch into 8 separate pieces for easier review.  These patches
> are a prerequisite for enabling the boolification of comparisons in the
> gimplifier and the necessary type-cast preservation in gimple from/to boolean type.
>
> This patch adds support to fold_truth_not_expr for bitwise binary and
> bitwise-not expressions on one-bit precision types.

It seems this is only necessary because we still have TRUTH_NOT_EXPR
in our IL and did not replace that with BIT_NOT_EXPR consistently yet.

So no, this is not ok.  fold-const.c is really mostly supposed to deal
with GENERIC where we distinguish TRUTH_* and BIT_* variants.

Please instead lower TRUTH_NOT_EXPR to BIT_NOT_EXPR for gimple.

Richard.



* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13  9:02 ` Richard Guenther
@ 2011-07-13  9:08   ` Kai Tietz
  2011-07-13 10:56     ` Richard Guenther
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Tietz @ 2011-07-13  9:08 UTC (permalink / raw)
  To: Richard Guenther; +Cc: GCC Patches

Sorry, the TRUTH_NOT_EXPR isn't the point here at all.  The underlying
issue is that fold-const re-introduces TRUTH_AND/OR and co.  To avoid
it, it needs to learn to handle 1-bit precision folding for those
bitwise operations on 1-bit integer types specially.
As gimple relies on this FE fold for now, it has to learn about
that.  As soon as gimple_fold (and other passes) no longer rely on the
FE's fold-const, we can remove those parts again.  Otherwise this
boolification of compares (and also the transition of TRUTH_NOT ->
BIT_NOT) simply doesn't work until then.

Regards,
Kai

2011/7/13 Richard Guenther <richard.guenther@gmail.com>:
> On Wed, Jul 13, 2011 at 9:32 AM, Kai Tietz <ktietz70@googlemail.com> wrote:
>> Hello,
>>
>> I split my old patch into 8 separate pieces for easier review.  These patches
>> are a prerequisite for enabling the boolification of comparisons in the
>> gimplifier and the necessary type-cast preservation in gimple from/to boolean type.
>>
>> This patch adds support to fold_truth_not_expr for bitwise binary and
>> bitwise-not expressions on one-bit precision types.
>
> It seems this is only necessary because we still have TRUTH_NOT_EXPR
> in our IL and did not replace that with BIT_NOT_EXPR consistently yet.
>
> So no, this is not ok.  fold-const.c is really mostly supposed to deal
> with GENERIC where we distinguish TRUTH_* and BIT_* variants.
>
> Please instead lower TRUTH_NOT_EXPR to BIT_NOT_EXPR for gimple.
>
> Richard.



-- 
|  (\_/) This is Bunny. Copy and paste
| (='.'=) Bunny into your signature to help
| (")_(") him gain world domination


* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13  9:08   ` Kai Tietz
@ 2011-07-13 10:56     ` Richard Guenther
  2011-07-13 11:11       ` Kai Tietz
  0 siblings, 1 reply; 8+ messages in thread
From: Richard Guenther @ 2011-07-13 10:56 UTC (permalink / raw)
  To: Kai Tietz; +Cc: GCC Patches

On Wed, Jul 13, 2011 at 11:04 AM, Kai Tietz <ktietz70@googlemail.com> wrote:
> Sorry, the TRUTH_NOT_EXPR isn't the point here at all.  The underlying
> issue is that fold-const re-introduces TRUTH_AND/OR and co.

I'm very sure it doesn't re-construct TRUTH_ variants out of thin air
when you present it with BIT_ variants as input.

  To avoid
> it, it needs to learn to handle 1-bit precision folding for those
> bitwise operations on 1-bit integer types specially.
> As gimple relies on this FE fold for now, it has to learn about
> that.  As soon as gimple_fold (and other passes) no longer rely on the
> FE's fold-const, we can remove those parts again.  Otherwise this
> boolification of compares (and also the transition of TRUTH_NOT ->
> BIT_NOT) simply doesn't work until then.

I do not believe that.



* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13 10:56     ` Richard Guenther
@ 2011-07-13 11:11       ` Kai Tietz
  2011-07-13 12:14         ` Richard Guenther
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Tietz @ 2011-07-13 11:11 UTC (permalink / raw)
  To: Richard Guenther; +Cc: GCC Patches

2011/7/13 Richard Guenther <richard.guenther@gmail.com>:
> On Wed, Jul 13, 2011 at 11:04 AM, Kai Tietz <ktietz70@googlemail.com> wrote:
>> Sorry, the TRUTH_NOT_EXPR isn't the point here at all.  The underlying
>> issue is that fold-const re-introduces TRUTH_AND/OR and co.
>
> I'm very sure it doesn't re-construct TRUTH_ variants out of thin air
> when you present it with BIT_ variants as input.

Well, look into fold-const's fold_binary_loc function and see

  /* ARG0 is the first operand of EXPR, and ARG1 is the second operand.

     First check for cases where an arithmetic operation is applied to a
     compound, conditional, or comparison operation.  Push the arithmetic
     operation inside the compound or conditional to see if any folding
     can then be done.  Convert comparison to conditional for this purpose.
     The also optimizes non-constant cases that used to be done in
     expand_expr.

     Before we do that, see if this is a BIT_AND_EXPR or a BIT_IOR_EXPR,
     one of the operands is a comparison and the other is a comparison, a
     BIT_AND_EXPR with the constant 1, or a truth value.  In that case, the
     code below would make the expression more complex.  Change it to a
     TRUTH_{AND,OR}_EXPR.  Likewise, convert a similar NE_EXPR to
     TRUTH_XOR_EXPR and an EQ_EXPR to the inversion of a TRUTH_XOR_EXPR.  */

  if ((code == BIT_AND_EXPR || code == BIT_IOR_EXPR
       || code == EQ_EXPR || code == NE_EXPR)
      && ((truth_value_p (TREE_CODE (arg0))
	   && (truth_value_p (TREE_CODE (arg1))
	       || (TREE_CODE (arg1) == BIT_AND_EXPR
		   && integer_onep (TREE_OPERAND (arg1, 1)))))
	  || (truth_value_p (TREE_CODE (arg1))
	      && (truth_value_p (TREE_CODE (arg0))
		  || (TREE_CODE (arg0) == BIT_AND_EXPR
		      && integer_onep (TREE_OPERAND (arg0, 1)))))))
    {
      tem = fold_build2_loc (loc, code == BIT_AND_EXPR ? TRUTH_AND_EXPR
			 : code == BIT_IOR_EXPR ? TRUTH_OR_EXPR
			 : TRUTH_XOR_EXPR,
			 boolean_type_node,
			 fold_convert_loc (loc, boolean_type_node, arg0),
			 fold_convert_loc (loc, boolean_type_node, arg1));

      if (code == EQ_EXPR)
	tem = invert_truthvalue_loc (loc, tem);

      return fold_convert_loc (loc, type, tem);
    }

Here TRUTH_AND/TRUTH_OR gets introduced unconditionally if the operands
are of truth kind.  This is, by the way, why you see those cases being
handled.  But as soon as this part is turned off for BIT_IOR/AND, we
need to do the folding for the 1-bit precision case explicitly.
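
A trivial bit of source that feeds this path is two comparisons combined
with '&'; both operands of the BIT_AND_EXPR then satisfy truth_value_p and
the block above rebuilds it as a TRUTH_AND_EXPR.  (Illustrative guess at a
triggering input, not a reduced testcase:)

/* Both operands of '&' are comparisons, i.e. truth values, so the quoted
   fold-const.c code turns the BIT_AND_EXPR into a TRUTH_AND_EXPR.  */
int
f (int a, int b, int c)
{
  return (a < b) & (b < c);
}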



-- 
|  (\_/) This is Bunny. Copy and paste
| (='.'=) Bunny into your signature to help
| (")_(") him gain world domination


* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13 11:11       ` Kai Tietz
@ 2011-07-13 12:14         ` Richard Guenther
  2011-07-13 13:04           ` Kai Tietz
  0 siblings, 1 reply; 8+ messages in thread
From: Richard Guenther @ 2011-07-13 12:14 UTC (permalink / raw)
  To: Kai Tietz; +Cc: GCC Patches

On Wed, Jul 13, 2011 at 1:08 PM, Kai Tietz <ktietz70@googlemail.com> wrote:
> 2011/7/13 Richard Guenther <richard.guenther@gmail.com>:
>> On Wed, Jul 13, 2011 at 11:04 AM, Kai Tietz <ktietz70@googlemail.com> wrote:
>>> Sorry, the TRUTH_NOT_EXPR isn't the point here at all.  The underlying
>>> issue is that fold-const re-introduces TRUTH_AND/OR and co.
>>
>> I'm very sure it doesn't re-construct TRUTH_ variants out of thin air
>> when you present it with BIT_ variants as input.
>
> Well, look into fold-const's fold_binary_loc function and see
>
>  /* ARG0 is the first operand of EXPR, and ARG1 is the second operand.
>
>     First check for cases where an arithmetic operation is applied to a
>     compound, conditional, or comparison operation.  Push the arithmetic
>     operation inside the compound or conditional to see if any folding
>     can then be done.  Convert comparison to conditional for this purpose.
>     The also optimizes non-constant cases that used to be done in
>     expand_expr.
>
>     Before we do that, see if this is a BIT_AND_EXPR or a BIT_IOR_EXPR,
>     one of the operands is a comparison and the other is a comparison, a
>     BIT_AND_EXPR with the constant 1, or a truth value.  In that case, the
>     code below would make the expression more complex.  Change it to a
>     TRUTH_{AND,OR}_EXPR.  Likewise, convert a similar NE_EXPR to
>     TRUTH_XOR_EXPR and an EQ_EXPR to the inversion of a TRUTH_XOR_EXPR.  */
>
>  if ((code == BIT_AND_EXPR || code == BIT_IOR_EXPR
>       || code == EQ_EXPR || code == NE_EXPR)
>      && ((truth_value_p (TREE_CODE (arg0))
>           && (truth_value_p (TREE_CODE (arg1))
>               || (TREE_CODE (arg1) == BIT_AND_EXPR
>                   && integer_onep (TREE_OPERAND (arg1, 1)))))
>          || (truth_value_p (TREE_CODE (arg1))
>              && (truth_value_p (TREE_CODE (arg0))
>                  || (TREE_CODE (arg0) == BIT_AND_EXPR
>                      && integer_onep (TREE_OPERAND (arg0, 1)))))))
>    {
>      tem = fold_build2_loc (loc, code == BIT_AND_EXPR ? TRUTH_AND_EXPR
>                         : code == BIT_IOR_EXPR ? TRUTH_OR_EXPR
>                         : TRUTH_XOR_EXPR,
>                         boolean_type_node,
>                         fold_convert_loc (loc, boolean_type_node, arg0),
>                         fold_convert_loc (loc, boolean_type_node, arg1));
>
>      if (code == EQ_EXPR)
>        tem = invert_truthvalue_loc (loc, tem);
>
>      return fold_convert_loc (loc, type, tem);
>    }
>
> Here TRUTH_AND/TRUTH_OR gets introduced unconditionally if the operands
> are of truth kind.  This is, by the way, why you see those cases being
> handled.  But as soon as this part is turned off for BIT_IOR/AND, we
> need to do the folding for the 1-bit precision case explicitly.

First of all, this checks for a quite complex pattern - where do we pass
such a complex pattern from the gimple level to fold?  For the EQ/NE_EXPR
case forwprop might be able to feed it that, but then how does
it go wrong?  The above could also simply be guarded by !in_gimple_form.
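
(For the record, the kind of source shape that would have to reach fold as
an EQ_EXPR of two truth values is something like the function below - an
illustrative sketch, not a testcase reduced from an actual failure:)

/* An EQ_EXPR whose operands are comparisons, i.e. truth values; the quoted
   fold-const.c block would rewrite it as the inversion of a TRUTH_XOR_EXPR.  */
int
g (int a, int b, int c, int d)
{
  return (a < b) == (c < d);
}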

Richard.


* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13 12:14         ` Richard Guenther
@ 2011-07-13 13:04           ` Kai Tietz
  2011-07-14 11:48             ` Richard Guenther
  0 siblings, 1 reply; 8+ messages in thread
From: Kai Tietz @ 2011-07-13 13:04 UTC (permalink / raw)
  To: Richard Guenther; +Cc: GCC Patches

2011/7/13 Richard Guenther <richard.guenther@gmail.com>:
> First of all, this checks for a quite complex pattern - where do we pass
> such a complex pattern from the gimple level to fold?  For the EQ/NE_EXPR
> case forwprop might be able to feed it that, but then how does
> it go wrong?  The above could also simply be guarded by !in_gimple_form.
>
> Richard.

See the reassoc pass as an example, and these hacky maybe_fold_and_comparisons
/ maybe_fold_or_comparisons functions.  Indeed we still want to be
able to do comparison foldings without getting back a TRUTH op.
Additionally we have a lot of passes - like the vectorizer - which
happily try to build new conditions at the tree level.  This is another
place where I saw issues and tree-cfg failures.  And last but not least, those
truth ops might be reintroduced in gimple_fold as soon as we treat
bitwise ops on one-bit precision integral types as truth values.

Kai


* Re: [patch 1/8 tree-optimization]: Bitwise logic for fold_truth_not_expr
  2011-07-13 13:04           ` Kai Tietz
@ 2011-07-14 11:48             ` Richard Guenther
  0 siblings, 0 replies; 8+ messages in thread
From: Richard Guenther @ 2011-07-14 11:48 UTC (permalink / raw)
  To: Kai Tietz; +Cc: GCC Patches

On Wed, Jul 13, 2011 at 2:15 PM, Kai Tietz <ktietz70@googlemail.com> wrote:
> See the reassoc pass as an example,

I don't see where.

> and these hacky maybe_fold_and_comparisons
> / maybe_fold_or_comparisons functions.

Yes, that case is known - but unless the folding result is sane we
drop it anyway.

>  Indeed we still want to be
> able to do comparison foldings without getting back a TRUTH op.

Well, if we get back one we have canonicalize_cond_expr_cond to
make it sane again.

> Additionally we have a lot of passes - like the vectorizer - which
> happily try to build new conditions at the tree level.  This is another

That's true and you promised to clean up remaining TRUTH_*
consumers / producers when doing the canonicalization to BIT_*.
At the moment they get back to BIT_* via force_gimple_operand*
which is probably even fine.

> place where I saw issues and tree-cfg failures.  And last but not least, those
> truth ops might be reintroduced in gimple_fold as soon as we treat
> bitwise ops on one-bit precision integral types as truth values.

Well, I already said that one-bit-precision as truth_value is wrong.

Relying on fold-const.c here with its very strict type rules always
was a stretch.  If it turns out there are now cases that we want to
handle differently, then we finally have to bite the bullet and do it.
There are always two options - duplicate it properly in fold_stmt
(or in forwprop, in case it needs to look up defs), or remove it from fold
and make sure we fold things using fold_stmt at gimplification time.

Richard.

> Kai
>

