public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
@ 2011-06-16 11:46 Richard Guenther
  2011-06-16 12:06 ` Jay Foad
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Richard Guenther @ 2011-06-16 11:46 UTC (permalink / raw)
  To: gcc-patches


This is a (possible) pre-requesite for the bitfield lowering patch,
taken from the old mem-ref branch.  It introduces BIT_FIELD_EXPR
which can be used to do bitfield composition.
BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
a & ~((1 << C1 - 1) << C2) | ((b << C2) & (1 << C1 = 1)), thus
inserting b of width C1 at the bitfield position C2 in a, returning
the new value.  This allows translating
 BIT_FIELD_REF <a, C1, C2> = b;
to
 a = BIT_FIELD_EXPR <a, b, C1, C2>;
which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
similar to COMPLEX_EXPR).  BIT_FIELD_EXPR is supposed to work
on registers only.

Comments welcome, esp. on how to avoid introducing quaternary
RHS on gimple stmts (or using a GIMPLE_SINGLE_RHS as the patch does).

Folders/combiners are missing to handle some of the easy
BIT_FIELD_REF / BIT_FIELD_EXPR cases, as well as eventually
re-writing shift/mask operations to BIT_FIELD_REF/EXPR.

Richard.

2011-06-16  Richard Guenther  <rguenther@suse.de>

	* expr.c (expand_expr_real_1): Handle BIT_FIELD_EXPR.
	* fold-const.c (operand_equal_p): Likewise.
	(build_bit_mask): New function.
	(fold_quaternary_loc): Likewise.
	(fold): Call it.
	(fold_build4_stat_loc): New function.
	* gimplify.c (gimplify_expr): Handle BIT_FIELD_EXPR.
	* tree-inline.c (estimate_operator_cost): Likewise.
	* tree-pretty-print.c (dump_generic_node): Likewise.
	* tree-ssa-operands.c (get_expr_operands): Likewise.
	* tree.def (BIT_FIELD_EXPR): New tree code.
	* tree.h (build_bit_mask): Declare.
	(fold_quaternary): Define.
	(fold_quaternary_loc): Declare.
	(fold_build4): Define.
	(fold_build4_loc): Likewise.
	(fold_build4_stat_loc): Declare.
	* gimple.c (gimple_rhs_class_table): Handle BIT_FIELD_EXPR.

Index: trunk/gcc/expr.c
===================================================================
*** trunk.orig/gcc/expr.c	2011-06-15 13:27:40.000000000 +0200
--- trunk/gcc/expr.c	2011-06-15 15:08:41.000000000 +0200
*************** expand_expr_real_1 (tree exp, rtx target
*** 8680,8685 ****
--- 8680,8708 ----
  
        return expand_constructor (exp, target, modifier, false);
  
+     case BIT_FIELD_EXPR:
+       {
+         unsigned bitpos = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 3));
+         unsigned bitsize = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 2));
+         tree bits, mask;
+         if (BYTES_BIG_ENDIAN)
+           bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
+         /* build a mask to mask/clear the bits in the word.  */
+         mask = build_bit_mask (type, bitsize, bitpos);
+         /* extend the bits to the word type, shift them to the right
+            place and mask the bits.  */
+         bits = fold_convert (type, TREE_OPERAND (exp, 1));
+         bits = fold_build2 (BIT_AND_EXPR, type,
+                             fold_build2 (LSHIFT_EXPR, type,
+                                          bits, size_int (bitpos)), mask);
+         /* switch to clear mask and do the composition.  */
+         mask = fold_build1 (BIT_NOT_EXPR, type, mask);
+         return expand_normal (fold_build2 (BIT_IOR_EXPR, type,
+                               fold_build2 (BIT_AND_EXPR, type,
+                                            TREE_OPERAND (exp, 0), mask),
+                               bits));
+       }
+ 
      case TARGET_MEM_REF:
        {
  	addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (exp));
Index: trunk/gcc/fold-const.c
===================================================================
*** trunk.orig/gcc/fold-const.c	2011-06-15 14:51:31.000000000 +0200
--- trunk/gcc/fold-const.c	2011-06-15 15:33:04.000000000 +0200
*************** operand_equal_p (const_tree arg0, const_
*** 2667,2672 ****
--- 2667,2675 ----
  	case DOT_PROD_EXPR:
  	  return OP_SAME (0) && OP_SAME (1) && OP_SAME (2);
  
+ 	case BIT_FIELD_EXPR:
+ 	  return OP_SAME (0) && OP_SAME (1) && OP_SAME (2) && OP_SAME (3);
+ 
  	default:
  	  return 0;
  	}
*************** contains_label_p (tree st)
*** 13230,13235 ****
--- 13233,13251 ----
     (walk_tree_without_duplicates (&st, contains_label_1 , NULL) != NULL_TREE);
  }
  
+ /* Builds and returns a mask of integral type TYPE for masking out
+    BITSIZE bits at bit position BITPOS in a word of type TYPE.
+    The mask has the bits set from bit BITPOS to BITPOS + BITSIZE - 1.  */
+ 
+ tree
+ build_bit_mask (tree type, unsigned int bitsize, unsigned int bitpos)
+ {
+   tree mask = double_int_to_tree (type, double_int_mask (bitsize));
+   mask = const_binop (LSHIFT_EXPR, mask, size_int (bitpos));
+ 
+   return mask;
+ }
+ 
  /* Fold a ternary expression of code CODE and type TYPE with operands
     OP0, OP1, and OP2.  Return the folded expression if folding is
     successful.  Otherwise, return NULL_TREE.  */
*************** fold_ternary_loc (location_t loc, enum t
*** 13591,13596 ****
--- 13607,13685 ----
      } /* switch (code) */
  }
  
+ /* Fold a quaternary expression of code CODE and type TYPE with operands
+    OP0, OP1, OP2, and OP3.  Return the folded expression if folding is
+    successful.  Otherwise, return NULL_TREE.  */
+ 
+ tree
+ fold_quaternary_loc (location_t loc, enum tree_code code, tree type,
+ 		     tree op0, tree op1, tree op2, tree op3)
+ {
+   tree arg0 = NULL_TREE, arg1 = NULL_TREE;
+   enum tree_code_class kind = TREE_CODE_CLASS (code);
+ 
+   gcc_assert (IS_EXPR_CODE_CLASS (kind)
+               && TREE_CODE_LENGTH (code) == 4);
+ 
+   /* Strip any conversions that don't change the mode.  This is safe
+      for every expression, except for a comparison expression because
+      its signedness is derived from its operands.  So, in the latter
+      case, only strip conversions that don't change the signedness.
+ 
+      Note that this is done as an internal manipulation within the
+      constant folder, in order to find the simplest representation of
+      the arguments so that their form can be studied.  In any cases,
+      the appropriate type conversions should be put back in the tree
+      that will get out of the constant folder.  */
+   if (op0)
+     {
+       arg0 = op0;
+       STRIP_NOPS (arg0);
+     }
+ 
+   if (op1)
+     {
+       arg1 = op1;
+       STRIP_NOPS (arg1);
+     }
+ 
+   switch (code)
+     {
+     case BIT_FIELD_EXPR:
+       /* Constant fold BIT_FIELD_EXPR.  */
+       if (TREE_CODE (arg0) == INTEGER_CST
+ 	  || TREE_CODE (arg1) == INTEGER_CST)
+         {
+           unsigned bitpos = (unsigned) TREE_INT_CST_LOW (op3);
+           unsigned bitsize = (unsigned) TREE_INT_CST_LOW (op2);
+           tree bits, mask;
+           if (BYTES_BIG_ENDIAN)
+             bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
+           /* build a mask to mask/clear the bits in the word.  */
+           mask = build_bit_mask (type, bitsize, bitpos);
+           /* extend the bits to the word type, shift them to the right
+              place and mask the bits.  */
+           bits = fold_convert_loc (loc, type, arg1);
+           bits = fold_build2_loc (loc, BIT_AND_EXPR, type,
+ 				  fold_build2_loc (loc, LSHIFT_EXPR, type,
+ 						   bits, size_int (bitpos)),
+ 				  mask);
+           /* switch to clear mask and do the composition.  */
+           mask = fold_build1_loc (loc, BIT_NOT_EXPR, type, mask);
+           return fold_build2_loc (loc, BIT_IOR_EXPR, type,
+ 				  fold_build2_loc (loc, BIT_AND_EXPR, type,
+ 						   fold_convert (type, arg0),
+ 						   mask),
+ 				  bits);
+         }
+ 
+       return NULL_TREE;
+ 
+     default:
+       return NULL_TREE;
+     }
+ }
+ 
  /* Perform constant folding and related simplification of EXPR.
     The related simplifications include x*1 => x, x*0 => 0, etc.,
     and application of the associative law.
*************** fold (tree expr)
*** 13632,13638 ****
    if (IS_EXPR_CODE_CLASS (kind))
      {
        tree type = TREE_TYPE (t);
!       tree op0, op1, op2;
  
        switch (TREE_CODE_LENGTH (code))
  	{
--- 13721,13727 ----
    if (IS_EXPR_CODE_CLASS (kind))
      {
        tree type = TREE_TYPE (t);
!       tree op0, op1, op2, op3;
  
        switch (TREE_CODE_LENGTH (code))
  	{
*************** fold (tree expr)
*** 13651,13656 ****
--- 13740,13752 ----
  	  op2 = TREE_OPERAND (t, 2);
  	  tem = fold_ternary_loc (loc, code, type, op0, op1, op2);
  	  return tem ? tem : expr;
+ 	case 4:
+ 	  op0 = TREE_OPERAND (t, 0);
+ 	  op1 = TREE_OPERAND (t, 1);
+ 	  op2 = TREE_OPERAND (t, 2);
+ 	  op3 = TREE_OPERAND (t, 3);
+ 	  tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
+ 	  return tem ? tem : expr;
  	default:
  	  break;
  	}
*************** fold_build3_stat_loc (location_t loc, en
*** 14107,14112 ****
--- 14203,14294 ----
    return tem;
  }
  
+ /* Fold a quaternary tree expression with code CODE of type TYPE with
+    operands OP0, OP1, OP2, and OP3.  Return a folded expression if
+    successful.  Otherwise, return a tree expression with code CODE of
+    type TYPE with operands OP0, OP1, OP2, and OP3.  */
+ 
+ tree
+ fold_build4_stat_loc (location_t loc, enum tree_code code, tree type,
+ 		      tree op0, tree op1, tree op2, tree op3 MEM_STAT_DECL)
+ {
+   tree tem;
+ #ifdef ENABLE_FOLD_CHECKING
+   unsigned char checksum_before_op0[16],
+                 checksum_before_op1[16],
+                 checksum_before_op2[16],
+                 checksum_before_op3[16],
+ 		checksum_after_op0[16],
+ 		checksum_after_op1[16],
+ 		checksum_after_op2[16];
+ 		checksum_after_op3[16];
+   struct md5_ctx ctx;
+   htab_t ht;
+ 
+   ht = htab_create (32, htab_hash_pointer, htab_eq_pointer, NULL);
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op0, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_before_op0);
+   htab_empty (ht);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op1, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_before_op1);
+   htab_empty (ht);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op2, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_before_op2);
+   htab_empty (ht);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op3, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_before_op3);
+   htab_empty (ht);
+ #endif
+ 
+   gcc_assert (TREE_CODE_CLASS (code) != tcc_vl_exp);
+   tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
+   if (!tem)
+     tem = build4_stat_loc (loc, code, type, op0, op1, op2, op3 PASS_MEM_STAT);
+ 
+ #ifdef ENABLE_FOLD_CHECKING
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op0, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_after_op0);
+   htab_empty (ht);
+ 
+   if (memcmp (checksum_before_op0, checksum_after_op0, 16))
+     fold_check_failed (op0, tem);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op1, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_after_op1);
+   htab_empty (ht);
+ 
+   if (memcmp (checksum_before_op1, checksum_after_op1, 16))
+     fold_check_failed (op1, tem);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op2, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_after_op2);
+   htab_delete (ht);
+ 
+   if (memcmp (checksum_before_op2, checksum_after_op2, 16))
+     fold_check_failed (op2, tem);
+ 
+   md5_init_ctx (&ctx);
+   fold_checksum_tree (op3, &ctx, ht);
+   md5_finish_ctx (&ctx, checksum_after_op3);
+   htab_delete (ht);
+ 
+   if (memcmp (checksum_before_op3, checksum_after_op3, 16))
+     fold_check_failed (op3, tem);
+ #endif
+   return tem;
+ }
+ 
+ 
  /* Fold a CALL_EXPR expression of type TYPE with operands FN and NARGS
     arguments in ARGARRAY, and a null static chain.
     Return a folded expression if successful.  Otherwise, return a CALL_EXPR
Index: trunk/gcc/gimplify.c
===================================================================
*** trunk.orig/gcc/gimplify.c	2011-06-10 13:28:11.000000000 +0200
--- trunk/gcc/gimplify.c	2011-06-15 15:05:55.000000000 +0200
*************** gimplify_expr (tree *expr_p, gimple_seq
*** 7239,7244 ****
--- 7239,7248 ----
  	  /* Classified as tcc_expression.  */
  	  goto expr_3;
  
+ 	case BIT_FIELD_EXPR:
+ 	  /* Arguments 3 and 4 are constants.  */
+ 	  goto expr_2;
+ 
  	case POINTER_PLUS_EXPR:
            /* Convert ((type *)A)+offset into &A->field_of_type_and_offset.
  	     The second is gimple immediate saving a need for extra statement.
Index: trunk/gcc/tree-inline.c
===================================================================
*** trunk.orig/gcc/tree-inline.c	2011-06-07 14:41:22.000000000 +0200
--- trunk/gcc/tree-inline.c	2011-06-15 14:56:09.000000000 +0200
*************** estimate_operator_cost (enum tree_code c
*** 3357,3362 ****
--- 3357,3366 ----
          return weights->div_mod_cost;
        return 1;
  
+     /* Bit-field insertion needs several shift and mask operations.  */
+     case BIT_FIELD_EXPR:
+       return 3;
+ 
      default:
        /* We expect a copy assignment with no operator.  */
        gcc_assert (get_gimple_rhs_class (code) == GIMPLE_SINGLE_RHS);
Index: trunk/gcc/tree-pretty-print.c
===================================================================
*** trunk.orig/gcc/tree-pretty-print.c	2011-06-07 14:41:22.000000000 +0200
--- trunk/gcc/tree-pretty-print.c	2011-06-15 15:07:05.000000000 +0200
*************** dump_generic_node (pretty_printer *buffe
*** 1217,1222 ****
--- 1217,1234 ----
        pp_string (buffer, ">");
        break;
  
+     case BIT_FIELD_EXPR:
+       pp_string (buffer, "BIT_FIELD_EXPR <");
+       dump_generic_node (buffer, TREE_OPERAND (node, 0), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, TREE_OPERAND (node, 1), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, TREE_OPERAND (node, 2), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, TREE_OPERAND (node, 3), spc, flags, false);
+       pp_string (buffer, ">");
+       break;
+ 
      case ARRAY_REF:
      case ARRAY_RANGE_REF:
        op0 = TREE_OPERAND (node, 0);
Index: trunk/gcc/tree-ssa-operands.c
===================================================================
*** trunk.orig/gcc/tree-ssa-operands.c	2011-04-18 11:06:42.000000000 +0200
--- trunk/gcc/tree-ssa-operands.c	2011-06-15 14:54:12.000000000 +0200
*************** get_expr_operands (gimple stmt, tree *ex
*** 974,979 ****
--- 974,984 ----
        get_expr_operands (stmt, &TREE_OPERAND (expr, 0), flags);
        return;
  
+     case BIT_FIELD_EXPR:
+       gcc_assert (TREE_CODE (TREE_OPERAND (expr, 2)) == INTEGER_CST
+ 		  && TREE_CODE (TREE_OPERAND (expr, 3)) == INTEGER_CST);
+       /* Fallthru.  */
+ 
      case TRUTH_AND_EXPR:
      case TRUTH_OR_EXPR:
      case TRUTH_XOR_EXPR:
Index: trunk/gcc/tree.def
===================================================================
*** trunk.orig/gcc/tree.def	2011-05-11 11:09:42.000000000 +0200
--- trunk/gcc/tree.def	2011-06-15 14:53:17.000000000 +0200
*************** DEFTREECODE (ADDR_EXPR, "addr_expr", tcc
*** 784,789 ****
--- 784,801 ----
     descriptor of type ptr_mode.  */
  DEFTREECODE (FDESC_EXPR, "fdesc_expr", tcc_expression, 2)
  
+ /* Given a word, a value and a bitfield position and size within
+    the word, produce the value that results if replacing the
+    described parts of word with value.
+    Operand 0 is a tree for the word of integral type;
+    Operand 1 is a tree for the value of integral type;
+    Operand 2 is a tree giving the constant number of bits being
+    referenced which is less or equal to the precision of the value;
+    Operand 3 is a tree giving the constant position of the first referenced
+    bit such that the sum of operands 2 and 3 is less than or equal to the
+    precision of the word.  */
+ DEFTREECODE (BIT_FIELD_EXPR, "bitfield_expr", tcc_expression, 4)
+ 
  /* Given two real or integer operands of the same type,
     returns a complex value of the corresponding complex type.  */
  DEFTREECODE (COMPLEX_EXPR, "complex_expr", tcc_binary, 2)
Index: trunk/gcc/tree.h
===================================================================
*** trunk.orig/gcc/tree.h	2011-06-15 13:27:06.000000000 +0200
--- trunk/gcc/tree.h	2011-06-15 15:19:59.000000000 +0200
*************** extern bool is_typedef_decl (tree x);
*** 5091,5096 ****
--- 5091,5097 ----
  extern bool typedef_variant_p (tree);
  extern bool auto_var_in_fn_p (const_tree, const_tree);
  extern tree build_low_bits_mask (tree, unsigned);
+ extern tree build_bit_mask (tree type, unsigned int, unsigned int);
  extern tree tree_strip_nop_conversions (tree);
  extern tree tree_strip_sign_nop_conversions (tree);
  extern tree lhd_gcc_personality (void);
*************** extern tree fold_binary_loc (location_t,
*** 5147,5152 ****
--- 5148,5156 ----
  #define fold_ternary(CODE,T1,T2,T3,T4)\
     fold_ternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4)
  extern tree fold_ternary_loc (location_t, enum tree_code, tree, tree, tree, tree);
+ #define fold_quaternary(CODE,T1,T2,T3,T4,T5)\
+    fold_quaternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4, T5)
+ extern tree fold_quaternary_loc (location_t, enum tree_code, tree, tree, tree, tree, tree);
  #define fold_build1(c,t1,t2)\
     fold_build1_stat_loc (UNKNOWN_LOCATION, c, t1, t2 MEM_STAT_INFO)
  #define fold_build1_loc(l,c,t1,t2)\
*************** extern tree fold_build2_stat_loc (locati
*** 5165,5170 ****
--- 5169,5180 ----
     fold_build3_stat_loc (l, c, t1, t2, t3, t4 MEM_STAT_INFO)
  extern tree fold_build3_stat_loc (location_t, enum tree_code, tree, tree, tree,
  				  tree MEM_STAT_DECL);
+ #define fold_build4(c,t1,t2,t3,t4,t5)\
+    fold_build4_stat_loc (UNKNOWN_LOCATION, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
+ #define fold_build4_loc(l,c,t1,t2,t3,t4,t5)\
+    fold_build4_stat_loc (l, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
+ extern tree fold_build4_stat_loc (location_t, enum tree_code, tree, tree, tree,
+ 				  tree, tree MEM_STAT_DECL);
  extern tree fold_build1_initializer_loc (location_t, enum tree_code, tree, tree);
  extern tree fold_build2_initializer_loc (location_t, enum tree_code, tree, tree, tree);
  extern tree fold_build3_initializer_loc (location_t, enum tree_code, tree, tree, tree, tree);
Index: trunk/gcc/gimple.c
===================================================================
*** trunk.orig/gcc/gimple.c	2011-06-15 15:33:04.000000000 +0200
--- trunk/gcc/gimple.c	2011-06-15 15:33:54.000000000 +0200
*************** get_gimple_rhs_num_ops (enum tree_code c
*** 2599,2605 ****
        || (SYM) == ADDR_EXPR						    \
        || (SYM) == WITH_SIZE_EXPR					    \
        || (SYM) == SSA_NAME						    \
!       || (SYM) == VEC_COND_EXPR) ? GIMPLE_SINGLE_RHS			    \
     : GIMPLE_INVALID_RHS),
  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
  
--- 2599,2606 ----
        || (SYM) == ADDR_EXPR						    \
        || (SYM) == WITH_SIZE_EXPR					    \
        || (SYM) == SSA_NAME						    \
!       || (SYM) == VEC_COND_EXPR						    \
!       || (SYM) == BIT_FIELD_EXPR) ? GIMPLE_SINGLE_RHS			    \
     : GIMPLE_INVALID_RHS),
  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
  

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 11:46 [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR Richard Guenther
@ 2011-06-16 12:06 ` Jay Foad
  2011-06-16 17:18 ` Richard Henderson
  2011-06-19 23:45 ` William J. Schmidt
  2 siblings, 0 replies; 12+ messages in thread
From: Jay Foad @ 2011-06-16 12:06 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
> a & ~((1 << C1 - 1) << C2) | ((b << C2) & (1 << C1 = 1)),

a & ~(((1 << C1) - 1) << C2) | ((b & ((1 << C1) - 1)) << C2)

?

Jay.

 thus
> inserting b of width C1 at the bitfield position C2 in a, returning
> the new value.  This allows translating
>  BIT_FIELD_REF <a, C1, C2> = b;
> to
>  a = BIT_FIELD_EXPR <a, b, C1, C2>;
> which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
> similar to COMPLEX_EXPR).  BIT_FIELD_EXPR is supposed to work
> on registers only.
>
> Comments welcome, esp. on how to avoid introducing quaternary
> RHS on gimple stmts (or using a GIMPLE_SINGLE_RHS as the patch does).
>
> Folders/combiners are missing to handle some of the easy
> BIT_FIELD_REF / BIT_FIELD_EXPR cases, as well as eventually
> re-writing shift/mask operations to BIT_FIELD_REF/EXPR.
>
> Richard.
>
> 2011-06-16  Richard Guenther  <rguenther@suse.de>
>
>        * expr.c (expand_expr_real_1): Handle BIT_FIELD_EXPR.
>        * fold-const.c (operand_equal_p): Likewise.
>        (build_bit_mask): New function.
>        (fold_quaternary_loc): Likewise.
>        (fold): Call it.
>        (fold_build4_stat_loc): New function.
>        * gimplify.c (gimplify_expr): Handle BIT_FIELD_EXPR.
>        * tree-inline.c (estimate_operator_cost): Likewise.
>        * tree-pretty-print.c (dump_generic_node): Likewise.
>        * tree-ssa-operands.c (get_expr_operands): Likewise.
>        * tree.def (BIT_FIELD_EXPR): New tree code.
>        * tree.h (build_bit_mask): Declare.
>        (fold_quaternary): Define.
>        (fold_quaternary_loc): Declare.
>        (fold_build4): Define.
>        (fold_build4_loc): Likewise.
>        (fold_build4_stat_loc): Declare.
>        * gimple.c (gimple_rhs_class_table): Handle BIT_FIELD_EXPR.
>
> Index: trunk/gcc/expr.c
> ===================================================================
> *** trunk.orig/gcc/expr.c       2011-06-15 13:27:40.000000000 +0200
> --- trunk/gcc/expr.c    2011-06-15 15:08:41.000000000 +0200
> *************** expand_expr_real_1 (tree exp, rtx target
> *** 8680,8685 ****
> --- 8680,8708 ----
>
>        return expand_constructor (exp, target, modifier, false);
>
> +     case BIT_FIELD_EXPR:
> +       {
> +         unsigned bitpos = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 3));
> +         unsigned bitsize = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 2));
> +         tree bits, mask;
> +         if (BYTES_BIG_ENDIAN)
> +           bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> +         /* build a mask to mask/clear the bits in the word.  */
> +         mask = build_bit_mask (type, bitsize, bitpos);
> +         /* extend the bits to the word type, shift them to the right
> +            place and mask the bits.  */
> +         bits = fold_convert (type, TREE_OPERAND (exp, 1));
> +         bits = fold_build2 (BIT_AND_EXPR, type,
> +                             fold_build2 (LSHIFT_EXPR, type,
> +                                          bits, size_int (bitpos)), mask);
> +         /* switch to clear mask and do the composition.  */
> +         mask = fold_build1 (BIT_NOT_EXPR, type, mask);
> +         return expand_normal (fold_build2 (BIT_IOR_EXPR, type,
> +                               fold_build2 (BIT_AND_EXPR, type,
> +                                            TREE_OPERAND (exp, 0), mask),
> +                               bits));
> +       }
> +
>      case TARGET_MEM_REF:
>        {
>        addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (exp));
> Index: trunk/gcc/fold-const.c
> ===================================================================
> *** trunk.orig/gcc/fold-const.c 2011-06-15 14:51:31.000000000 +0200
> --- trunk/gcc/fold-const.c      2011-06-15 15:33:04.000000000 +0200
> *************** operand_equal_p (const_tree arg0, const_
> *** 2667,2672 ****
> --- 2667,2675 ----
>        case DOT_PROD_EXPR:
>          return OP_SAME (0) && OP_SAME (1) && OP_SAME (2);
>
> +       case BIT_FIELD_EXPR:
> +         return OP_SAME (0) && OP_SAME (1) && OP_SAME (2) && OP_SAME (3);
> +
>        default:
>          return 0;
>        }
> *************** contains_label_p (tree st)
> *** 13230,13235 ****
> --- 13233,13251 ----
>     (walk_tree_without_duplicates (&st, contains_label_1 , NULL) != NULL_TREE);
>  }
>
> + /* Builds and returns a mask of integral type TYPE for masking out
> +    BITSIZE bits at bit position BITPOS in a word of type TYPE.
> +    The mask has the bits set from bit BITPOS to BITPOS + BITSIZE - 1.  */
> +
> + tree
> + build_bit_mask (tree type, unsigned int bitsize, unsigned int bitpos)
> + {
> +   tree mask = double_int_to_tree (type, double_int_mask (bitsize));
> +   mask = const_binop (LSHIFT_EXPR, mask, size_int (bitpos));
> +
> +   return mask;
> + }
> +
>  /* Fold a ternary expression of code CODE and type TYPE with operands
>     OP0, OP1, and OP2.  Return the folded expression if folding is
>     successful.  Otherwise, return NULL_TREE.  */
> *************** fold_ternary_loc (location_t loc, enum t
> *** 13591,13596 ****
> --- 13607,13685 ----
>      } /* switch (code) */
>  }
>
> + /* Fold a quaternary expression of code CODE and type TYPE with operands
> +    OP0, OP1, OP2, and OP3.  Return the folded expression if folding is
> +    successful.  Otherwise, return NULL_TREE.  */
> +
> + tree
> + fold_quaternary_loc (location_t loc, enum tree_code code, tree type,
> +                    tree op0, tree op1, tree op2, tree op3)
> + {
> +   tree arg0 = NULL_TREE, arg1 = NULL_TREE;
> +   enum tree_code_class kind = TREE_CODE_CLASS (code);
> +
> +   gcc_assert (IS_EXPR_CODE_CLASS (kind)
> +               && TREE_CODE_LENGTH (code) == 4);
> +
> +   /* Strip any conversions that don't change the mode.  This is safe
> +      for every expression, except for a comparison expression because
> +      its signedness is derived from its operands.  So, in the latter
> +      case, only strip conversions that don't change the signedness.
> +
> +      Note that this is done as an internal manipulation within the
> +      constant folder, in order to find the simplest representation of
> +      the arguments so that their form can be studied.  In any cases,
> +      the appropriate type conversions should be put back in the tree
> +      that will get out of the constant folder.  */
> +   if (op0)
> +     {
> +       arg0 = op0;
> +       STRIP_NOPS (arg0);
> +     }
> +
> +   if (op1)
> +     {
> +       arg1 = op1;
> +       STRIP_NOPS (arg1);
> +     }
> +
> +   switch (code)
> +     {
> +     case BIT_FIELD_EXPR:
> +       /* Constant fold BIT_FIELD_EXPR.  */
> +       if (TREE_CODE (arg0) == INTEGER_CST
> +         || TREE_CODE (arg1) == INTEGER_CST)
> +         {
> +           unsigned bitpos = (unsigned) TREE_INT_CST_LOW (op3);
> +           unsigned bitsize = (unsigned) TREE_INT_CST_LOW (op2);
> +           tree bits, mask;
> +           if (BYTES_BIG_ENDIAN)
> +             bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> +           /* build a mask to mask/clear the bits in the word.  */
> +           mask = build_bit_mask (type, bitsize, bitpos);
> +           /* extend the bits to the word type, shift them to the right
> +              place and mask the bits.  */
> +           bits = fold_convert_loc (loc, type, arg1);
> +           bits = fold_build2_loc (loc, BIT_AND_EXPR, type,
> +                                 fold_build2_loc (loc, LSHIFT_EXPR, type,
> +                                                  bits, size_int (bitpos)),
> +                                 mask);
> +           /* switch to clear mask and do the composition.  */
> +           mask = fold_build1_loc (loc, BIT_NOT_EXPR, type, mask);
> +           return fold_build2_loc (loc, BIT_IOR_EXPR, type,
> +                                 fold_build2_loc (loc, BIT_AND_EXPR, type,
> +                                                  fold_convert (type, arg0),
> +                                                  mask),
> +                                 bits);
> +         }
> +
> +       return NULL_TREE;
> +
> +     default:
> +       return NULL_TREE;
> +     }
> + }
> +
>  /* Perform constant folding and related simplification of EXPR.
>     The related simplifications include x*1 => x, x*0 => 0, etc.,
>     and application of the associative law.
> *************** fold (tree expr)
> *** 13632,13638 ****
>    if (IS_EXPR_CODE_CLASS (kind))
>      {
>        tree type = TREE_TYPE (t);
> !       tree op0, op1, op2;
>
>        switch (TREE_CODE_LENGTH (code))
>        {
> --- 13721,13727 ----
>    if (IS_EXPR_CODE_CLASS (kind))
>      {
>        tree type = TREE_TYPE (t);
> !       tree op0, op1, op2, op3;
>
>        switch (TREE_CODE_LENGTH (code))
>        {
> *************** fold (tree expr)
> *** 13651,13656 ****
> --- 13740,13752 ----
>          op2 = TREE_OPERAND (t, 2);
>          tem = fold_ternary_loc (loc, code, type, op0, op1, op2);
>          return tem ? tem : expr;
> +       case 4:
> +         op0 = TREE_OPERAND (t, 0);
> +         op1 = TREE_OPERAND (t, 1);
> +         op2 = TREE_OPERAND (t, 2);
> +         op3 = TREE_OPERAND (t, 3);
> +         tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
> +         return tem ? tem : expr;
>        default:
>          break;
>        }
> *************** fold_build3_stat_loc (location_t loc, en
> *** 14107,14112 ****
> --- 14203,14294 ----
>    return tem;
>  }
>
> + /* Fold a quaternary tree expression with code CODE of type TYPE with
> +    operands OP0, OP1, OP2, and OP3.  Return a folded expression if
> +    successful.  Otherwise, return a tree expression with code CODE of
> +    type TYPE with operands OP0, OP1, OP2, and OP3.  */
> +
> + tree
> + fold_build4_stat_loc (location_t loc, enum tree_code code, tree type,
> +                     tree op0, tree op1, tree op2, tree op3 MEM_STAT_DECL)
> + {
> +   tree tem;
> + #ifdef ENABLE_FOLD_CHECKING
> +   unsigned char checksum_before_op0[16],
> +                 checksum_before_op1[16],
> +                 checksum_before_op2[16],
> +                 checksum_before_op3[16],
> +               checksum_after_op0[16],
> +               checksum_after_op1[16],
> +               checksum_after_op2[16];
> +               checksum_after_op3[16];
> +   struct md5_ctx ctx;
> +   htab_t ht;
> +
> +   ht = htab_create (32, htab_hash_pointer, htab_eq_pointer, NULL);
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op0, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op0);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op1, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op1);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op2, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op2);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op3, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op3);
> +   htab_empty (ht);
> + #endif
> +
> +   gcc_assert (TREE_CODE_CLASS (code) != tcc_vl_exp);
> +   tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
> +   if (!tem)
> +     tem = build4_stat_loc (loc, code, type, op0, op1, op2, op3 PASS_MEM_STAT);
> +
> + #ifdef ENABLE_FOLD_CHECKING
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op0, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op0);
> +   htab_empty (ht);
> +
> +   if (memcmp (checksum_before_op0, checksum_after_op0, 16))
> +     fold_check_failed (op0, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op1, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op1);
> +   htab_empty (ht);
> +
> +   if (memcmp (checksum_before_op1, checksum_after_op1, 16))
> +     fold_check_failed (op1, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op2, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op2);
> +   htab_delete (ht);
> +
> +   if (memcmp (checksum_before_op2, checksum_after_op2, 16))
> +     fold_check_failed (op2, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op3, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op3);
> +   htab_delete (ht);
> +
> +   if (memcmp (checksum_before_op3, checksum_after_op3, 16))
> +     fold_check_failed (op3, tem);
> + #endif
> +   return tem;
> + }
> +
> +
>  /* Fold a CALL_EXPR expression of type TYPE with operands FN and NARGS
>     arguments in ARGARRAY, and a null static chain.
>     Return a folded expression if successful.  Otherwise, return a CALL_EXPR
> Index: trunk/gcc/gimplify.c
> ===================================================================
> *** trunk.orig/gcc/gimplify.c   2011-06-10 13:28:11.000000000 +0200
> --- trunk/gcc/gimplify.c        2011-06-15 15:05:55.000000000 +0200
> *************** gimplify_expr (tree *expr_p, gimple_seq
> *** 7239,7244 ****
> --- 7239,7248 ----
>          /* Classified as tcc_expression.  */
>          goto expr_3;
>
> +       case BIT_FIELD_EXPR:
> +         /* Arguments 3 and 4 are constants.  */
> +         goto expr_2;
> +
>        case POINTER_PLUS_EXPR:
>            /* Convert ((type *)A)+offset into &A->field_of_type_and_offset.
>             The second is gimple immediate saving a need for extra statement.
> Index: trunk/gcc/tree-inline.c
> ===================================================================
> *** trunk.orig/gcc/tree-inline.c        2011-06-07 14:41:22.000000000 +0200
> --- trunk/gcc/tree-inline.c     2011-06-15 14:56:09.000000000 +0200
> *************** estimate_operator_cost (enum tree_code c
> *** 3357,3362 ****
> --- 3357,3366 ----
>          return weights->div_mod_cost;
>        return 1;
>
> +     /* Bit-field insertion needs several shift and mask operations.  */
> +     case BIT_FIELD_EXPR:
> +       return 3;
> +
>      default:
>        /* We expect a copy assignment with no operator.  */
>        gcc_assert (get_gimple_rhs_class (code) == GIMPLE_SINGLE_RHS);
> Index: trunk/gcc/tree-pretty-print.c
> ===================================================================
> *** trunk.orig/gcc/tree-pretty-print.c  2011-06-07 14:41:22.000000000 +0200
> --- trunk/gcc/tree-pretty-print.c       2011-06-15 15:07:05.000000000 +0200
> *************** dump_generic_node (pretty_printer *buffe
> *** 1217,1222 ****
> --- 1217,1234 ----
>        pp_string (buffer, ">");
>        break;
>
> +     case BIT_FIELD_EXPR:
> +       pp_string (buffer, "BIT_FIELD_EXPR <");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 0), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 1), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 2), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 3), spc, flags, false);
> +       pp_string (buffer, ">");
> +       break;
> +
>      case ARRAY_REF:
>      case ARRAY_RANGE_REF:
>        op0 = TREE_OPERAND (node, 0);
> Index: trunk/gcc/tree-ssa-operands.c
> ===================================================================
> *** trunk.orig/gcc/tree-ssa-operands.c  2011-04-18 11:06:42.000000000 +0200
> --- trunk/gcc/tree-ssa-operands.c       2011-06-15 14:54:12.000000000 +0200
> *************** get_expr_operands (gimple stmt, tree *ex
> *** 974,979 ****
> --- 974,984 ----
>        get_expr_operands (stmt, &TREE_OPERAND (expr, 0), flags);
>        return;
>
> +     case BIT_FIELD_EXPR:
> +       gcc_assert (TREE_CODE (TREE_OPERAND (expr, 2)) == INTEGER_CST
> +                 && TREE_CODE (TREE_OPERAND (expr, 3)) == INTEGER_CST);
> +       /* Fallthru.  */
> +
>      case TRUTH_AND_EXPR:
>      case TRUTH_OR_EXPR:
>      case TRUTH_XOR_EXPR:
> Index: trunk/gcc/tree.def
> ===================================================================
> *** trunk.orig/gcc/tree.def     2011-05-11 11:09:42.000000000 +0200
> --- trunk/gcc/tree.def  2011-06-15 14:53:17.000000000 +0200
> *************** DEFTREECODE (ADDR_EXPR, "addr_expr", tcc
> *** 784,789 ****
> --- 784,801 ----
>     descriptor of type ptr_mode.  */
>  DEFTREECODE (FDESC_EXPR, "fdesc_expr", tcc_expression, 2)
>
> + /* Given a word, a value and a bitfield position and size within
> +    the word, produce the value that results if replacing the
> +    described parts of word with value.
> +    Operand 0 is a tree for the word of integral type;
> +    Operand 1 is a tree for the value of integral type;
> +    Operand 2 is a tree giving the constant number of bits being
> +    referenced which is less or equal to the precision of the value;
> +    Operand 3 is a tree giving the constant position of the first referenced
> +    bit such that the sum of operands 2 and 3 is less than or equal to the
> +    precision of the word.  */
> + DEFTREECODE (BIT_FIELD_EXPR, "bitfield_expr", tcc_expression, 4)
> +
>  /* Given two real or integer operands of the same type,
>     returns a complex value of the corresponding complex type.  */
>  DEFTREECODE (COMPLEX_EXPR, "complex_expr", tcc_binary, 2)
> Index: trunk/gcc/tree.h
> ===================================================================
> *** trunk.orig/gcc/tree.h       2011-06-15 13:27:06.000000000 +0200
> --- trunk/gcc/tree.h    2011-06-15 15:19:59.000000000 +0200
> *************** extern bool is_typedef_decl (tree x);
> *** 5091,5096 ****
> --- 5091,5097 ----
>  extern bool typedef_variant_p (tree);
>  extern bool auto_var_in_fn_p (const_tree, const_tree);
>  extern tree build_low_bits_mask (tree, unsigned);
> + extern tree build_bit_mask (tree type, unsigned int, unsigned int);
>  extern tree tree_strip_nop_conversions (tree);
>  extern tree tree_strip_sign_nop_conversions (tree);
>  extern tree lhd_gcc_personality (void);
> *************** extern tree fold_binary_loc (location_t,
> *** 5147,5152 ****
> --- 5148,5156 ----
>  #define fold_ternary(CODE,T1,T2,T3,T4)\
>     fold_ternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4)
>  extern tree fold_ternary_loc (location_t, enum tree_code, tree, tree, tree, tree);
> + #define fold_quaternary(CODE,T1,T2,T3,T4,T5)\
> +    fold_quaternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4, T5)
> + extern tree fold_quaternary_loc (location_t, enum tree_code, tree, tree, tree, tree, tree);
>  #define fold_build1(c,t1,t2)\
>     fold_build1_stat_loc (UNKNOWN_LOCATION, c, t1, t2 MEM_STAT_INFO)
>  #define fold_build1_loc(l,c,t1,t2)\
> *************** extern tree fold_build2_stat_loc (locati
> *** 5165,5170 ****
> --- 5169,5180 ----
>     fold_build3_stat_loc (l, c, t1, t2, t3, t4 MEM_STAT_INFO)
>  extern tree fold_build3_stat_loc (location_t, enum tree_code, tree, tree, tree,
>                                  tree MEM_STAT_DECL);
> + #define fold_build4(c,t1,t2,t3,t4,t5)\
> +    fold_build4_stat_loc (UNKNOWN_LOCATION, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
> + #define fold_build4_loc(l,c,t1,t2,t3,t4,t5)\
> +    fold_build4_stat_loc (l, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
> + extern tree fold_build4_stat_loc (location_t, enum tree_code, tree, tree, tree,
> +                                 tree, tree MEM_STAT_DECL);
>  extern tree fold_build1_initializer_loc (location_t, enum tree_code, tree, tree);
>  extern tree fold_build2_initializer_loc (location_t, enum tree_code, tree, tree, tree);
>  extern tree fold_build3_initializer_loc (location_t, enum tree_code, tree, tree, tree, tree);
> Index: trunk/gcc/gimple.c
> ===================================================================
> *** trunk.orig/gcc/gimple.c     2011-06-15 15:33:04.000000000 +0200
> --- trunk/gcc/gimple.c  2011-06-15 15:33:54.000000000 +0200
> *************** get_gimple_rhs_num_ops (enum tree_code c
> *** 2599,2605 ****
>        || (SYM) == ADDR_EXPR                                               \
>        || (SYM) == WITH_SIZE_EXPR                                          \
>        || (SYM) == SSA_NAME                                                \
> !       || (SYM) == VEC_COND_EXPR) ? GIMPLE_SINGLE_RHS                      \
>     : GIMPLE_INVALID_RHS),
>  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
>
> --- 2599,2606 ----
>        || (SYM) == ADDR_EXPR                                               \
>        || (SYM) == WITH_SIZE_EXPR                                          \
>        || (SYM) == SSA_NAME                                                \
> !       || (SYM) == VEC_COND_EXPR                                                   \
> !       || (SYM) == BIT_FIELD_EXPR) ? GIMPLE_SINGLE_RHS                     \
>     : GIMPLE_INVALID_RHS),
>  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 11:46 [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR Richard Guenther
  2011-06-16 12:06 ` Jay Foad
@ 2011-06-16 17:18 ` Richard Henderson
  2011-06-16 18:10   ` Eric Botcazou
  2011-06-20 14:23   ` William J. Schmidt
  2011-06-19 23:45 ` William J. Schmidt
  2 siblings, 2 replies; 12+ messages in thread
From: Richard Henderson @ 2011-06-16 17:18 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

On 06/16/2011 04:35 AM, Richard Guenther wrote:
> 
> This is a (possible) pre-requesite for the bitfield lowering patch,
> taken from the old mem-ref branch.  It introduces BIT_FIELD_EXPR
> which can be used to do bitfield composition.
> BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
> a & ~((1 << C1 - 1) << C2) | ((b << C2) & (1 << C1 = 1)), thus
> inserting b of width C1 at the bitfield position C2 in a, returning
> the new value.  This allows translating
>  BIT_FIELD_REF <a, C1, C2> = b;
> to
>  a = BIT_FIELD_EXPR <a, b, C1, C2>;
> which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
> similar to COMPLEX_EXPR).  BIT_FIELD_EXPR is supposed to work
> on registers only.

The name BIT_FIELD_EXPR isn't immediately obvious to me what
it does.  Does it insert or extract a field?

I suppose you're using BIT_FIELD_REF for extraction?

I think this would be clearer with a name like DEPOSIT_EXPR,
similar to the ia64 deposit instruction.

Or INSV_EXPR, if we want to retain existing GCC terminology
(although the insv pattern itself uses an in-out first operand,
leading to a comment in the ia64 md file about insv being less
optimal that it could be).


> *************** expand_expr_real_1 (tree exp, rtx target
> *** 8680,8685 ****
> --- 8680,8708 ----
>   
>         return expand_constructor (exp, target, modifier, false);
>   
> +     case BIT_FIELD_EXPR:
> +       {
> +         unsigned bitpos = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 3));
> +         unsigned bitsize = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 2));
> +         tree bits, mask;
> +         if (BYTES_BIG_ENDIAN)
> +           bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> +         /* build a mask to mask/clear the bits in the word.  */
> +         mask = build_bit_mask (type, bitsize, bitpos);
> +         /* extend the bits to the word type, shift them to the right
> +            place and mask the bits.  */
> +         bits = fold_convert (type, TREE_OPERAND (exp, 1));
> +         bits = fold_build2 (BIT_AND_EXPR, type,
> +                             fold_build2 (LSHIFT_EXPR, type,
> +                                          bits, size_int (bitpos)), mask);
> +         /* switch to clear mask and do the composition.  */
> +         mask = fold_build1 (BIT_NOT_EXPR, type, mask);
> +         return expand_normal (fold_build2 (BIT_IOR_EXPR, type,
> +                               fold_build2 (BIT_AND_EXPR, type,
> +                                            TREE_OPERAND (exp, 0), mask),
> +                               bits));

Surely you should implement this with store_bit_field, even if
you have to introduce a copy to a temporary for correctness.
Otherwise you'll have to rely on combine to put this back
together into the backend's insv pattern.


> +     /* Bit-field insertion needs several shift and mask operations.  */
> +     case BIT_FIELD_EXPR:
> +       return 3;

... depending on the target, of course.



r~

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 17:18 ` Richard Henderson
@ 2011-06-16 18:10   ` Eric Botcazou
  2011-06-16 19:23     ` Richard Guenther
  2011-06-20 14:23   ` William J. Schmidt
  1 sibling, 1 reply; 12+ messages in thread
From: Eric Botcazou @ 2011-06-16 18:10 UTC (permalink / raw)
  To: Richard Henderson; +Cc: gcc-patches, Richard Guenther

> I think this would be clearer with a name like DEPOSIT_EXPR,
> similar to the ia64 deposit instruction.

ia64's demise wasn't entirely undeserved then.  IMO the descriptive power of
DEPOSIT_EXPR is almost null.  BIT_FIELD_MODIFY_EXPR or something like this.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 18:10   ` Eric Botcazou
@ 2011-06-16 19:23     ` Richard Guenther
  2011-06-16 19:53       ` Richard Henderson
  0 siblings, 1 reply; 12+ messages in thread
From: Richard Guenther @ 2011-06-16 19:23 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Henderson, gcc-patches, Richard Guenther

On Thu, Jun 16, 2011 at 7:24 PM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>> I think this would be clearer with a name like DEPOSIT_EXPR,
>> similar to the ia64 deposit instruction.
>
> ia64's demise wasn't entirely undeserved then.  IMO the descriptive power of
> DEPOSIT_EXPR is almost null.  BIT_FIELD_MODIFY_EXPR or something like this.

It's more like BIT_FIELD_COMPOSE_EXPR which is why I chose BIT_FIELD_EXPR,
similar to how we have COMPLEX_EXPR which composes two scalar values.
I don't mind changing the name though, but maybe to BIT_FIELD_COMPOSE_EXPR
then?

The expansion code is ad-hoc, I'm not too familiar with what utilities
we have to do a better job here.  I'll have a look at store_bit_field
(though that sounds memory-esque).

Richard.

> --
> Eric Botcazou
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 19:23     ` Richard Guenther
@ 2011-06-16 19:53       ` Richard Henderson
  0 siblings, 0 replies; 12+ messages in thread
From: Richard Henderson @ 2011-06-16 19:53 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Eric Botcazou, gcc-patches, Richard Guenther

On 06/16/2011 12:14 PM, Richard Guenther wrote:
> On Thu, Jun 16, 2011 at 7:24 PM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>>> I think this would be clearer with a name like DEPOSIT_EXPR,
>>> similar to the ia64 deposit instruction.
>>
>> ia64's demise wasn't entirely undeserved then.  IMO the descriptive power of
>> DEPOSIT_EXPR is almost null.  BIT_FIELD_MODIFY_EXPR or something like this.
> 
> It's more like BIT_FIELD_COMPOSE_EXPR which is why I chose BIT_FIELD_EXPR,
> similar to how we have COMPLEX_EXPR which composes two scalar values.
> I don't mind changing the name though, but maybe to BIT_FIELD_COMPOSE_EXPR
> then?

That'd be fine.

> The expansion code is ad-hoc, I'm not too familiar with what utilities
> we have to do a better job here.  I'll have a look at store_bit_field
> (though that sounds memory-esque).

It probably was, originally.  But it's got paths through there to handle
writing into a register destination, which will wind up in in the 
various insv and strict_low_part patterns that the md file supports.


r~

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 11:46 [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR Richard Guenther
  2011-06-16 12:06 ` Jay Foad
  2011-06-16 17:18 ` Richard Henderson
@ 2011-06-19 23:45 ` William J. Schmidt
  2011-06-20 13:44   ` Michael Matz
  2 siblings, 1 reply; 12+ messages in thread
From: William J. Schmidt @ 2011-06-19 23:45 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

On Thu, 2011-06-16 at 13:35 +0200, Richard Guenther wrote:
> This is a (possible) pre-requesite for the bitfield lowering patch,
> taken from the old mem-ref branch.  It introduces BIT_FIELD_EXPR
> which can be used to do bitfield composition.
> BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
> a & ~((1 << C1 - 1) << C2) | ((b << C2) & (1 << C1 = 1)), thus
> inserting b of width C1 at the bitfield position C2 in a, returning
> the new value.  This allows translating
>  BIT_FIELD_REF <a, C1, C2> = b;
> to
>  a = BIT_FIELD_EXPR <a, b, C1, C2>;
> which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
> similar to COMPLEX_EXPR).  BIT_FIELD_EXPR is supposed to work
> on registers only.
> 
> Comments welcome, esp. on how to avoid introducing quaternary
> RHS on gimple stmts (or using a GIMPLE_SINGLE_RHS as the patch does).
> 

At the risk of being obvious...it seems you could easily combine C1 and
C2 into a single "bitfield descriptor" TREE_INT_CST operand by using
both the high and low portions of the constant.  Accessor macros could
be used to hide the slight hackishness of the solution.  I didn't see
anything in either patch where this would look particularly ugly.

Storing operands differently than in BIT_FIELD_REF isn't ideal, but
perhaps it's better than a quaternary RHS.  /shrug

Bill



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-19 23:45 ` William J. Schmidt
@ 2011-06-20 13:44   ` Michael Matz
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Matz @ 2011-06-20 13:44 UTC (permalink / raw)
  To: William J. Schmidt; +Cc: Richard Guenther, gcc-patches

Hi,

On Sun, 19 Jun 2011, William J. Schmidt wrote:

> At the risk of being obvious...it seems you could easily combine C1 and 
> C2 into a single "bitfield descriptor" TREE_INT_CST operand by using 
> both the high and low portions of the constant.  Accessor macros could 
> be used to hide the slight hackishness of the solution.  I didn't see 
> anything in either patch where this would look particularly ugly.

Agreed.  I don't even see it as hackish, it's quite normal that the 
implementation of data structures actually uses a different layout than a 
completely orthogonal one.  As long as the accessors are looking 
orthogonal.


Ciao,
Michael.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-16 17:18 ` Richard Henderson
  2011-06-16 18:10   ` Eric Botcazou
@ 2011-06-20 14:23   ` William J. Schmidt
  2011-06-20 19:09     ` Andrew Pinski
  1 sibling, 1 reply; 12+ messages in thread
From: William J. Schmidt @ 2011-06-20 14:23 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Richard Guenther, gcc-patches

On Thu, 2011-06-16 at 09:49 -0700, Richard Henderson wrote:
> On 06/16/2011 04:35 AM, Richard Guenther wrote:
> > 
> > +     /* Bit-field insertion needs several shift and mask operations.  */
> > +     case BIT_FIELD_EXPR:
> > +       return 3;
> 
> ... depending on the target, of course.
> 

Agreed -- this is a single-instruction operation on PowerPC.  Probably
need some target-specific weights here.

> 
> 
> r~


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-20 14:23   ` William J. Schmidt
@ 2011-06-20 19:09     ` Andrew Pinski
  2011-06-20 21:03       ` Richard Guenther
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Pinski @ 2011-06-20 19:09 UTC (permalink / raw)
  To: William J. Schmidt; +Cc: Richard Henderson, Richard Guenther, gcc-patches

On Mon, Jun 20, 2011 at 7:19 AM, William J. Schmidt
<wschmidt@linux.vnet.ibm.com> wrote:
> On Thu, 2011-06-16 at 09:49 -0700, Richard Henderson wrote:
>> On 06/16/2011 04:35 AM, Richard Guenther wrote:
>> >
>> > +     /* Bit-field insertion needs several shift and mask operations.  */
>> > +     case BIT_FIELD_EXPR:
>> > +       return 3;
>>
>> ... depending on the target, of course.
>>
>
> Agreed -- this is a single-instruction operation on PowerPC.  Probably
> need some target-specific weights here.

It is also a single instruction on MIPS32R2 and MIPS64R2.  So a target
hook is the best here rather than a constant number in the target hook
field.

Thanks,
Andrew Pinski

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-20 19:09     ` Andrew Pinski
@ 2011-06-20 21:03       ` Richard Guenther
  2011-06-20 21:05         ` Andrew Pinski
  0 siblings, 1 reply; 12+ messages in thread
From: Richard Guenther @ 2011-06-20 21:03 UTC (permalink / raw)
  To: Andrew Pinski
  Cc: William J. Schmidt, Richard Henderson, Richard Guenther, gcc-patches

On Mon, Jun 20, 2011 at 8:43 PM, Andrew Pinski <pinskia@gmail.com> wrote:
> On Mon, Jun 20, 2011 at 7:19 AM, William J. Schmidt
> <wschmidt@linux.vnet.ibm.com> wrote:
>> On Thu, 2011-06-16 at 09:49 -0700, Richard Henderson wrote:
>>> On 06/16/2011 04:35 AM, Richard Guenther wrote:
>>> >
>>> > +     /* Bit-field insertion needs several shift and mask operations.  */
>>> > +     case BIT_FIELD_EXPR:
>>> > +       return 3;
>>>
>>> ... depending on the target, of course.
>>>
>>
>> Agreed -- this is a single-instruction operation on PowerPC.  Probably
>> need some target-specific weights here.
>
> It is also a single instruction on MIPS32R2 and MIPS64R2.  So a target
> hook is the best here rather than a constant number in the target hook
> field.

Yeah, well.  We have mostly no target dependency in those gimple
statement speed/size cost metric, so the above 3 is matching
how the expansion to gimple shift/mask stmts would measure.

Richard.

> Thanks,
> Andrew Pinski
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
  2011-06-20 21:03       ` Richard Guenther
@ 2011-06-20 21:05         ` Andrew Pinski
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Pinski @ 2011-06-20 21:05 UTC (permalink / raw)
  To: Richard Guenther
  Cc: William J. Schmidt, Richard Henderson, Richard Guenther, gcc-patches

On Mon, Jun 20, 2011 at 1:54 PM, Richard Guenther
<richard.guenther@gmail.com> wrote:
> Yeah, well.  We have mostly no target dependency in those gimple
> statement speed/size cost metric, so the above 3 is matching
> how the expansion to gimple shift/mask stmts would measure.

Except RTH has mentioned there is already a way to expand it using
that one instruction (insv and such). So I think you can measure it
without a target hook really.

Thanks,
Andrew Pinski

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2011-06-20 20:58 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-16 11:46 [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR Richard Guenther
2011-06-16 12:06 ` Jay Foad
2011-06-16 17:18 ` Richard Henderson
2011-06-16 18:10   ` Eric Botcazou
2011-06-16 19:23     ` Richard Guenther
2011-06-16 19:53       ` Richard Henderson
2011-06-20 14:23   ` William J. Schmidt
2011-06-20 19:09     ` Andrew Pinski
2011-06-20 21:03       ` Richard Guenther
2011-06-20 21:05         ` Andrew Pinski
2011-06-19 23:45 ` William J. Schmidt
2011-06-20 13:44   ` Michael Matz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).