public inbox for gcc-patches@gcc.gnu.org
* [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
@ 2012-03-05 13:26 Richard Guenther
  2012-03-09  9:02 ` Eric Botcazou
From: Richard Guenther @ 2012-03-05 13:26 UTC
  To: gcc-patches


This fixes PRs 52080 (unexpected access to adjacent fields for bitfield
access), 52097 (ICE with the C++ memory model enabled) and 48124
(wrong code accessing adjacent objects).  It is a re-post of the RFCs
I sent out during stage4, unchanged except that it now re-uses the
pointer for DECL_QUALIFIER instead of enlarging each FIELD_DECL.

The core of the patch is the computation of an underlying FIELD_DECL
that should be used for the access of bitfield FIELD_DECLs (or that
should at least constrain such accesses).  This computation is done
when laying out a RECORD_TYPE, for simplicity after it has been laid
out.  This FIELD_DECL is currently only used to constrain bitfield
accesses, so it employs a 1:1 mapping.  I plan to change this to a 1:n
mapping (one bitfield group, multiple possibly overlapping
representatives) when resuming work on lowering bitfield accesses on
the gimple level (to address PR48696).
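
As a hypothetical illustration (this example is mine, not part of the
patch): for

  struct S { unsigned a : 3; unsigned b : 5; int x; };

the fields a and b form one bitfield group, so layout creates a single
representative FIELD_DECL covering bits [0, 7] (DECL_SIZE of 8 bits,
QImode on typical targets), and DECL_BIT_FIELD_REPRESENTATIVE of both
a and b points to it; x is outside the group, so accesses to a or b
are constrained to stay away from it.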

This patch also completely replaces get_bit_range (which is where
PR52097 ICEs) by a trivial implementation.

Due to PR52134 this patch will cause one gnat regression.

Bootstrapped and tested on 
{x86_64,i586,s390,s390x,ia64,ppc,ppc64}-suse-linux-gnu.

If there are no serious objections (or reasonable requests) I plan
to go forward with this version early next week.

If you have a weird target it might be good to double-check codegen
quality for bitfield stores (which is what you should already have done
with the C++ memory model enabled).
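
For example (a minimal harness of my own, not part of the patch),
compiling

  struct S { unsigned a : 3; unsigned b : 5; int x; } s;
  void set_b (unsigned v) { s.b = v; }

with -O2 -S and looking at the width of the read-modify-write emitted
for s.b is a quick way to see what access size the bitfield region
handling picks on your target.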

Thanks,
Richard.

2012-03-05  Richard Guenther  <rguenther@suse.de>

	* tree.h (DECL_BIT_FIELD_REPRESENTATIVE): New define.
	* stor-layout.c (start_bitfield_representative): New function.
	(finish_bitfield_representative): Likewise.
	(finish_bitfield_layout): Likewise.
	(finish_record_layout): Call finish_bitfield_layout.
	* tree-streamer-in.c (lto_input_ts_field_decl_tree_pointers):
	Stream DECL_BIT_FIELD_REPRESENTATIVE.
	* tree-streamer-out.c (write_ts_field_decl_tree_pointers): Likewise.

	PR middle-end/52080
	PR middle-end/52097
	PR middle-end/48124
	* expr.c (get_bit_range): Unconditionally extract bitrange
	from DECL_BIT_FIELD_REPRESENTATIVE.
	(expand_assignment): Adjust call to get_bit_range.

	* gimplify.c (gimplify_expr): Translate bitfield accesses
	to BIT_FIELD_REFs of the representative.

        * tree-sra.c (create_access_replacement): Only rename the
        replacement if we can rewrite it into SSA form.  Properly
        mark register typed replacements that we cannot rewrite
        with TREE_ADDRESSABLE.

	* gcc.dg/torture/pr48124-1.c: New testcase.
	* gcc.dg/torture/pr48124-2.c: Likewise.
	* gcc.dg/torture/pr48124-3.c: Likewise.
	* gcc.dg/torture/pr48124-4.c: Likewise.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/stor-layout.c	2012-02-24 12:53:11.000000000 +0100
*************** finalize_type_size (tree type)
*** 1722,1727 ****
--- 1722,1908 ----
      }
  }
  
+ /* Return a new underlying object for a bitfield started with FIELD.  */
+ 
+ static tree
+ start_bitfield_representative (tree field)
+ {
+   tree repr = make_node (FIELD_DECL);
+   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
+   /* Force the representative to begin at an BITS_PER_UNIT aligned
+      boundary - C++ may use tail-padding of a base object to
+      continue packing bits so the bitfield region does not start
+      at bit zero (see g++.dg/abi/bitfield5.C for example).
+      Unallocated bits may happen for other reasons as well,
+      for example Ada which allows explicit bit-granular structure layout.  */
+   DECL_FIELD_BIT_OFFSET (repr)
+     = size_binop (BIT_AND_EXPR,
+ 		  DECL_FIELD_BIT_OFFSET (field),
+ 		  bitsize_int (~(BITS_PER_UNIT - 1)));
+   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
+   DECL_SIZE (repr) = DECL_SIZE (field);
+   DECL_PACKED (repr) = DECL_PACKED (field);
+   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
+   return repr;
+ }
+ 
+ /* Finish up a bitfield group that was started by creating the underlying
+    object REPR with the last fied in the bitfield group FIELD.  */
+ 
+ static void
+ finish_bitfield_representative (tree repr, tree field)
+ {
+   unsigned HOST_WIDE_INT bitsize, maxbitsize, modesize;
+   enum machine_mode mode = BLKmode;
+   tree nextf, size;
+ 
+   size = size_diffop (DECL_FIELD_OFFSET (field),
+ 		      DECL_FIELD_OFFSET (repr));
+   gcc_assert (host_integerp (size, 1));
+   bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
+ 	     + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
+ 	     - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
+ 	     + tree_low_cst (DECL_SIZE (field), 1));
+ 
+   /* Now nothing tells us how to pad out bitsize ...  */
+   nextf = DECL_CHAIN (field);
+   while (nextf && TREE_CODE (nextf) != FIELD_DECL)
+     nextf = DECL_CHAIN (nextf);
+   if (nextf)
+     {
+       tree maxsize;
+       /* If there was an error, the field may not be laid out
+          correctly.  Don't bother to do anything.  */
+       if (TREE_TYPE (nextf) == error_mark_node)
+ 	return;
+       maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
+ 			     DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+   else
+     {
+       /* ???  If you consider that tail-padding of this struct might be
+          re-used when deriving from it we cannot really do the following
+ 	 and thus need to set maxsize to bitsize?  */
+       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
+ 				  DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+ 
+   /* Only if we don't artificially break up the representative in
+      the middle of a large bitfield with different possibly
+      overlapping representatives.  And all representatives start
+      at byte offset.  */
+   gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
+ 
+   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
+   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
+ 
+   /* Find the smallest nice mode to use.
+      ???  Possibly use get_best_mode with appropriate arguments instead
+      (which would eventually require splitting representatives here).  */
+   for (modesize = bitsize; modesize <= maxbitsize; modesize += BITS_PER_UNIT)
+     {
+       mode = mode_for_size (modesize, MODE_INT, 1);
+       if (mode != BLKmode)
+ 	break;
+     }
+ 
+   if (mode == BLKmode)
+     {
+       /* We really want a BLKmode representative only as a last resort,
+          considering the member b in
+ 	   struct { int a : 7; int b : 17; int c; } __attribute__((packed));
+ 	 Otherwise we simply want to split the representative up
+ 	 allowing for overlaps within the bitfield region as required for
+ 	   struct { int a : 7; int b : 7; int c : 10; int d; } __attribute__((packed));
+ 	 [0, 15] HImode for a and b, [8, 23] HImode for c.  */
+       DECL_SIZE (repr) = bitsize_int (bitsize);
+       DECL_SIZE_UNIT (repr) = size_int (bitsize / BITS_PER_UNIT);
+       DECL_MODE (repr) = BLKmode;
+       TREE_TYPE (repr) = build_array_type_nelts (unsigned_char_type_node,
+ 						 bitsize / BITS_PER_UNIT);
+     }
+   else
+     {
+       DECL_SIZE (repr) = bitsize_int (modesize);
+       DECL_SIZE_UNIT (repr) = size_int (modesize / BITS_PER_UNIT);
+       DECL_MODE (repr) = mode;
+       TREE_TYPE (repr) = lang_hooks.types.type_for_mode (mode, 1);
+     }
+ 
+   /* Remember whether the bitfield group is at the end of the
+      structure or not.  */
+   DECL_CHAIN (repr) = nextf;
+ }
+ 
+ /* Compute and set FIELD_DECLs for the underlying objects we should
+    use for bitfield access for the structure laid out with RLI.  */
+ 
+ static void
+ finish_bitfield_layout (record_layout_info rli)
+ {
+   tree field, prev;
+   tree repr = NULL_TREE;
+ 
+   /* Unions would be special, for the ease of type-punning optimizations
+      we could use the underlying type as hint for the representative
+      if the bitfield would fit and the representative would not exceed
+      the union in size.  */
+   if (TREE_CODE (rli->t) != RECORD_TYPE)
+     return;
+ 
+   for (prev = NULL_TREE, field = TYPE_FIELDS (rli->t);
+        field; field = DECL_CHAIN (field))
+     {
+       if (TREE_CODE (field) != FIELD_DECL)
+ 	continue;
+ 
+       /* In the C++ memory model, consecutive bit fields in a structure are
+ 	 considered one memory location and updating a memory location
+ 	 may not store into adjacent memory locations.  */
+       if (!repr
+ 	  && DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Start new representative.  */
+ 	  repr = start_bitfield_representative (field);
+ 	}
+       else if (repr
+ 	       && ! DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Finish off new representative.  */
+ 	  finish_bitfield_representative (repr, prev);
+ 	  repr = NULL_TREE;
+ 	}
+       else if (DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Zero-size bitfields finish off a representative and
+ 	     do not have a representative themselves.  This is
+ 	     required by the C++ memory model.  */
+ 	  if (integer_zerop (DECL_SIZE (field)))
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = NULL_TREE;
+ 	    }
+ 	}
+       else
+ 	continue;
+ 
+       if (repr)
+ 	DECL_BIT_FIELD_REPRESENTATIVE (field) = repr;
+ 
+       prev = field;
+     }
+ 
+   if (repr)
+     finish_bitfield_representative (repr, prev);
+ }
+ 
  /* Do all of the work required to layout the type indicated by RLI,
     once the fields have been laid out.  This function will call `free'
     for RLI, unless FREE_P is false.  Passing a value other than false
*************** finish_record_layout (record_layout_info
*** 1742,1747 ****
--- 1923,1931 ----
    /* Perform any last tweaks to the TYPE_SIZE, etc.  */
    finalize_type_size (rli->t);
  
+   /* Compute bitfield representatives.  */
+   finish_bitfield_layout (rli);
+ 
    /* Propagate TYPE_PACKED to variants.  With C++ templates,
       handle_packed_attribute is too early to do this.  */
    for (variant = TYPE_NEXT_VARIANT (rli->t); variant;
Index: gcc/tree.h
===================================================================
*** gcc/tree.h.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/tree.h	2012-02-24 12:52:39.000000000 +0100
*************** struct GTY(()) tree_decl_with_rtl {
*** 3021,3026 ****
--- 3021,3031 ----
  #define DECL_BIT_FIELD_TYPE(NODE) \
    (FIELD_DECL_CHECK (NODE)->field_decl.bit_field_type)
  
+ /* In a FIELD_DECL of a RECORD_TYPE, this is a pointer to the storage
+    representative FIELD_DECL.  */
+ #define DECL_BIT_FIELD_REPRESENTATIVE(NODE) \
+   (FIELD_DECL_CHECK (NODE)->field_decl.qualifier)
+ 
  /* For a FIELD_DECL in a QUAL_UNION_TYPE, records the expression, which
     if nonzero, indicates that the field occupies the type.  */
  #define DECL_QUALIFIER(NODE) (FIELD_DECL_CHECK (NODE)->field_decl.qualifier)
Index: gcc/tree-sra.c
===================================================================
*** gcc/tree-sra.c.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/tree-sra.c	2012-02-24 12:52:39.000000000 +0100
*************** create_access_replacement (struct access
*** 1922,1934 ****
  
    repl = create_tmp_var (access->type, "SR");
    add_referenced_var (repl);
!   if (rename)
      mark_sym_for_renaming (repl);
  
!   if (!access->grp_partial_lhs
!       && (TREE_CODE (access->type) == COMPLEX_TYPE
! 	  || TREE_CODE (access->type) == VECTOR_TYPE))
!     DECL_GIMPLE_REG_P (repl) = 1;
  
    DECL_SOURCE_LOCATION (repl) = DECL_SOURCE_LOCATION (access->base);
    DECL_ARTIFICIAL (repl) = 1;
--- 1922,1940 ----
  
    repl = create_tmp_var (access->type, "SR");
    add_referenced_var (repl);
!   if (!access->grp_partial_lhs
!       && rename)
      mark_sym_for_renaming (repl);
  
!   if (TREE_CODE (access->type) == COMPLEX_TYPE
!       || TREE_CODE (access->type) == VECTOR_TYPE)
!     {
!       if (!access->grp_partial_lhs)
! 	DECL_GIMPLE_REG_P (repl) = 1;
!     }
!   else if (access->grp_partial_lhs
! 	   && is_gimple_reg_type (access->type))
!     TREE_ADDRESSABLE (repl) = 1;
  
    DECL_SOURCE_LOCATION (repl) = DECL_SOURCE_LOCATION (access->base);
    DECL_ARTIFICIAL (repl) = 1;
Index: gcc/tree-cfg.c
===================================================================
*** gcc/tree-cfg.c.orig	2012-02-24 12:29:18.000000000 +0100
--- gcc/tree-cfg.c	2012-02-24 12:52:39.000000000 +0100
*************** verify_expr (tree *tp, int *walk_subtree
*** 2879,2895 ****
  		  error ("invalid position or size operand to BIT_FIELD_REF");
  		  return t;
  		}
! 	      else if (INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		       && (TYPE_PRECISION (TREE_TYPE (t))
! 			   != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("integral result type precision does not match "
  			 "field size of BIT_FIELD_REF");
  		  return t;
  		}
! 	      if (!INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		  && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (t)))
! 		      != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("mode precision of non-integral result does not "
  			 "match field size of BIT_FIELD_REF");
--- 2879,2897 ----
  		  error ("invalid position or size operand to BIT_FIELD_REF");
  		  return t;
  		}
! 	      if (INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		  && (TYPE_PRECISION (TREE_TYPE (t))
! 		      != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("integral result type precision does not match "
  			 "field size of BIT_FIELD_REF");
  		  return t;
  		}
! 	      else if (!INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		       && !AGGREGATE_TYPE_P (TREE_TYPE (t))
! 		       && TYPE_MODE (TREE_TYPE (t)) != BLKmode
! 		       && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (t)))
! 			   != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("mode precision of non-integral result does not "
  			 "match field size of BIT_FIELD_REF");
Index: gcc/expr.c
===================================================================
*** gcc/expr.c.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/expr.c	2012-02-24 12:52:39.000000000 +0100
*************** optimize_bitfield_assignment_op (unsigne
*** 4439,4551 ****
  /* In the C++ memory model, consecutive bit fields in a structure are
     considered one memory location.
  
!    Given a COMPONENT_REF, this function returns the bit range of
!    consecutive bits in which this COMPONENT_REF belongs in.  The
!    values are returned in *BITSTART and *BITEND.  If either the C++
!    memory model is not activated, or this memory access is not thread
!    visible, 0 is returned in *BITSTART and *BITEND.
! 
!    EXP is the COMPONENT_REF.
!    INNERDECL is the actual object being referenced.
!    BITPOS is the position in bits where the bit starts within the structure.
!    BITSIZE is size in bits of the field being referenced in EXP.
! 
!    For example, while storing into FOO.A here...
! 
!       struct {
!         BIT 0:
!           unsigned int a : 4;
! 	  unsigned int b : 1;
! 	BIT 8:
! 	  unsigned char c;
! 	  unsigned int d : 6;
!       } foo;
! 
!    ...we are not allowed to store past <b>, so for the layout above, a
!    range of 0..7 (because no one cares if we store into the
!    padding).  */
  
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp, tree innerdecl,
! 	       HOST_WIDE_INT bitpos, HOST_WIDE_INT bitsize)
  {
!   tree field, record_type, fld;
!   bool found_field = false;
!   bool prev_field_is_bitfield;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
!   /* If other threads can't see this value, no need to restrict stores.  */
!   if (ALLOW_STORE_DATA_RACES
!       || ((TREE_CODE (innerdecl) == MEM_REF
! 	   || TREE_CODE (innerdecl) == TARGET_MEM_REF)
! 	  && !ptr_deref_may_alias_global_p (TREE_OPERAND (innerdecl, 0)))
!       || (DECL_P (innerdecl)
! 	  && ((TREE_CODE (innerdecl) == VAR_DECL
! 	       && DECL_THREAD_LOCAL_P (innerdecl))
! 	      || !TREE_STATIC (innerdecl))))
      {
        *bitstart = *bitend = 0;
        return;
      }
  
!   /* Bit field we're storing into.  */
!   field = TREE_OPERAND (exp, 1);
!   record_type = DECL_FIELD_CONTEXT (field);
! 
!   /* Count the contiguous bitfields for the memory location that
!      contains FIELD.  */
!   *bitstart = 0;
!   prev_field_is_bitfield = true;
!   for (fld = TYPE_FIELDS (record_type); fld; fld = DECL_CHAIN (fld))
!     {
!       tree t, offset;
!       enum machine_mode mode;
!       int unsignedp, volatilep;
! 
!       if (TREE_CODE (fld) != FIELD_DECL)
! 	continue;
! 
!       t = build3 (COMPONENT_REF, TREE_TYPE (exp),
! 		  unshare_expr (TREE_OPERAND (exp, 0)),
! 		  fld, NULL_TREE);
!       get_inner_reference (t, &bitsize, &bitpos, &offset,
! 			   &mode, &unsignedp, &volatilep, true);
! 
!       if (field == fld)
! 	found_field = true;
! 
!       if (DECL_BIT_FIELD_TYPE (fld) && bitsize > 0)
! 	{
! 	  if (prev_field_is_bitfield == false)
! 	    {
! 	      *bitstart = bitpos;
! 	      prev_field_is_bitfield = true;
! 	    }
! 	}
!       else
! 	{
! 	  prev_field_is_bitfield = false;
! 	  if (found_field)
! 	    break;
! 	}
!     }
!   gcc_assert (found_field);
  
!   if (fld)
!     {
!       /* We found the end of the bit field sequence.  Include the
! 	 padding up to the next field and be done.  */
!       *bitend = bitpos - 1;
!     }
!   else
!     {
!       /* If this is the last element in the structure, include the padding
! 	 at the end of structure.  */
!       *bitend = TREE_INT_CST_LOW (TYPE_SIZE (record_type)) - 1;
!     }
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
--- 4439,4481 ----
  /* In the C++ memory model, consecutive bit fields in a structure are
     considered one memory location.
  
!    Given a COMPONENT_REF EXP at bit position BITPOS, this function
!    returns the bit range of consecutive bits in which this COMPONENT_REF
!    belongs in.  The values are returned in *BITSTART and *BITEND.
!    If the access does not need to be restricted 0 is returned in
!    *BITSTART and *BITEND.  */
  
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp,
! 	       HOST_WIDE_INT bitpos)
  {
!   unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr, offset;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
!   field = TREE_OPERAND (exp, 1);
!   repr = DECL_BIT_FIELD_REPRESENTATIVE (field);
!   /* If we do not have a DECL_BIT_FIELD_REPRESENTATIVE there is no
!      need to limit the range we can access.  */
!   if (!repr)
      {
        *bitstart = *bitend = 0;
        return;
      }
  
!   /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  */
!   offset = size_diffop (DECL_FIELD_OFFSET (field),
! 			DECL_FIELD_OFFSET (repr));
!   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
! 	       + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
!   *bitstart = bitpos - bitoffset;
!   *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
*************** expand_assignment (tree to, tree from, b
*** 4673,4680 ****
  
        if (TREE_CODE (to) == COMPONENT_REF
  	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
! 	get_bit_range (&bitregion_start, &bitregion_end,
! 		       to, tem, bitpos, bitsize);
  
        /* If we are going to use store_bit_field and extract_bit_field,
  	 make sure to_rtx will be safe for multiple use.  */
--- 4603,4609 ----
  
        if (TREE_CODE (to) == COMPONENT_REF
  	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
! 	get_bit_range (&bitregion_start, &bitregion_end, to, bitpos);
  
        /* If we are going to use store_bit_field and extract_bit_field,
  	 make sure to_rtx will be safe for multiple use.  */
Index: gcc/testsuite/gcc.dg/torture/pr48124-1.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-1.c	2012-02-24 12:52:39.000000000 +0100
***************
*** 0 ****
--- 1,33 ----
+ /* { dg-do run } */
+ /* { dg-options "-fno-toplevel-reorder" } */
+ 
+ extern void abort (void);
+ 
+ struct S
+ {
+   signed a : 26;
+   signed b : 16;
+   signed c : 10;
+   volatile signed d : 14;
+ };
+ 
+ static struct S e = { 0, 0, 0, 1 };
+ static int f = 1;
+ 
+ void __attribute__((noinline))
+ foo (void)
+ {
+   e.d = 0;
+   f = 2;
+ }
+ 
+ int
+ main ()
+ {
+   if (e.a || e.b || e.c || e.d != 1 || f != 1)
+     abort ();
+   foo ();
+   if (e.a || e.b || e.c || e.d || f != 2)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-2.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-2.c	2012-02-24 12:52:39.000000000 +0100
***************
*** 0 ****
--- 1,27 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ 
+ static volatile struct S0 {
+     short f3[9];
+     unsigned f8 : 15;
+ } s = {1};
+ static unsigned short sh = 0x1234;
+ 
+ struct S0 a, b;
+ int vi = 0;
+ 
+ void func_4()
+ {
+   s.f8 |= 1;
+   sh = 15;
+   if (vi) a = b;
+ }
+ 
+ int main()
+ {
+   func_4();
+   if (sh != 15)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-3.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-3.c	2012-02-24 12:52:39.000000000 +0100
***************
*** 0 ****
--- 1,32 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1
+ {
+   int f0;
+   int:1;
+   int f3;
+   int:1;
+   int:0;
+   int f6:1;
+ };
+ int g_13 = 1;
+ volatile struct S1 g_118 = {
+     1
+ };
+ 
+ void __attribute__((noinline))
+ func_46 ()
+ {
+   for (g_13 = 0; g_13 >= 0; g_13 -= 1)
+     g_118.f6 = 0;
+ }
+ 
+ int
+ main ()
+ {
+   func_46 ();
+   if (g_13 != -1)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-4.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-4.c	2012-02-24 12:52:39.000000000 +0100
***************
*** 0 ****
--- 1,28 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1 {
+     unsigned f0, f1;
+     unsigned short f2, f3;
+     unsigned f4 : 16;
+     unsigned f5, f6;
+     volatile unsigned f7 : 28;
+ };
+ static struct S1 g_76;
+ static struct S1 g_245 = {0,0,0,0,0,0,0,1};
+ static signed char g_323 = 0x80;
+ static void func_1(void)
+ {
+   g_245.f7 &= 1;
+   for (g_323 = 0; g_323 <= -1; g_323 -= 2) {
+       g_76 = g_76;
+       g_76.f4 ^= 11;
+   }
+ }
+ int main()
+ {
+   func_1();
+   if (g_323 != 0 || g_245.f7 != 1)
+     abort ();
+   return 0;
+ }
Index: gcc/tree-streamer-in.c
===================================================================
*** gcc/tree-streamer-in.c.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/tree-streamer-in.c	2012-02-24 12:52:39.000000000 +0100
*************** lto_input_ts_field_decl_tree_pointers (s
*** 640,646 ****
  {
    DECL_FIELD_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_BIT_FIELD_TYPE (expr) = stream_read_tree (ib, data_in);
!   /* Do not stream DECL_QUALIFIER, it is useless after gimplification.  */
    DECL_FIELD_BIT_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_FCONTEXT (expr) = stream_read_tree (ib, data_in);
  }
--- 640,646 ----
  {
    DECL_FIELD_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_BIT_FIELD_TYPE (expr) = stream_read_tree (ib, data_in);
!   DECL_BIT_FIELD_REPRESENTATIVE (expr) = stream_read_tree (ib, data_in);
    DECL_FIELD_BIT_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_FCONTEXT (expr) = stream_read_tree (ib, data_in);
  }
Index: gcc/tree-streamer-out.c
===================================================================
*** gcc/tree-streamer-out.c.orig	2012-02-21 12:59:35.000000000 +0100
--- gcc/tree-streamer-out.c	2012-02-24 12:52:39.000000000 +0100
*************** write_ts_field_decl_tree_pointers (struc
*** 552,558 ****
  {
    stream_write_tree (ob, DECL_FIELD_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_BIT_FIELD_TYPE (expr), ref_p);
!   /* Do not stream DECL_QUALIFIER, it is useless after gimplification.  */
    stream_write_tree (ob, DECL_FIELD_BIT_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_FCONTEXT (expr), ref_p);
  }
--- 552,558 ----
  {
    stream_write_tree (ob, DECL_FIELD_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_BIT_FIELD_TYPE (expr), ref_p);
!   stream_write_tree (ob, DECL_BIT_FIELD_REPRESENTATIVE (expr), ref_p);
    stream_write_tree (ob, DECL_FIELD_BIT_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_FCONTEXT (expr), ref_p);
  }


* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-05 13:26 [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere Richard Guenther
@ 2012-03-09  9:02 ` Eric Botcazou
  2012-03-12  9:38   ` Richard Guenther
From: Eric Botcazou @ 2012-03-09  9:02 UTC
  To: Richard Guenther; +Cc: gcc-patches

> This patch also completely replaces get_bit_range (which is where
> PR52097 ICEs) by a trivial implementation.

How does it short-circuit the decision made by get_best_mode exactly?  By 
making get_bit_range return non-zero in more cases?

> There is PR52134 which will make this patch cause 1 gnat regression.

This looks rather benign to me.

> 	* gimplify.c (gimplify_expr): Translate bitfield accesses
> 	to BIT_FIELD_REFs of the representative.

This part isn't in the patch.

> + /* Return a new underlying object for a bitfield started with FIELD.  */
> +
> + static tree
> + start_bitfield_representative (tree field)
> + {
> +   tree repr = make_node (FIELD_DECL);
> +   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
> +   /* Force the representative to begin at an BITS_PER_UNIT aligned

...at a BITS_PER_UNIT aligned...

> +      boundary - C++ may use tail-padding of a base object to
> +      continue packing bits so the bitfield region does not start
> +      at bit zero (see g++.dg/abi/bitfield5.C for example).
> +      Unallocated bits may happen for other reasons as well,
> +      for example Ada which allows explicit bit-granular structure layout.
>  */ +   DECL_FIELD_BIT_OFFSET (repr)
> +     = size_binop (BIT_AND_EXPR,
> + 		  DECL_FIELD_BIT_OFFSET (field),
> + 		  bitsize_int (~(BITS_PER_UNIT - 1)));
> +   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
> +   DECL_SIZE (repr) = DECL_SIZE (field);
> +   DECL_PACKED (repr) = DECL_PACKED (field);
> +   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
> +   return repr;

Any particular reason to set DECL_SIZE but not DECL_SIZE_UNIT here?  They are 
generally set together.

> +
> + /* Finish up a bitfield group that was started by creating the underlying
> +    object REPR with the last fied in the bitfield group FIELD.  */

...the last field...

> +   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
> +   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> +
> +   /* Find the smallest nice mode to use.
> +      ???  Possibly use get_best_mode with appropriate arguments instead
> +      (which would eventually require splitting representatives here).  */
> +   for (modesize = bitsize; modesize <= maxbitsize; modesize +=
> BITS_PER_UNIT) +     {
> +       mode = mode_for_size (modesize, MODE_INT, 1);
> +       if (mode != BLKmode)
> + 	break;
> +     }

The double loop looks superfluous.  Why not re-use the implementation of
smallest_mode_for_size?

  for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
       mode = GET_MODE_WIDER_MODE (mode))
    if (GET_MODE_PRECISION (mode) >= bitsize)
      break;

  if (mode != VOIDmode && GET_MODE_PRECISION (mode) > maxbitsize)
    mode = VOIDmode;

  if (mode == VOIDmode)
    ...

-- 
Eric Botcazou


* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-09  9:02 ` Eric Botcazou
@ 2012-03-12  9:38   ` Richard Guenther
  2012-03-12 10:21     ` Eric Botcazou
From: Richard Guenther @ 2012-03-12  9:38 UTC
  To: Eric Botcazou; +Cc: gcc-patches

On Fri, 9 Mar 2012, Eric Botcazou wrote:

> > This patch also completely replaces get_bit_range (which is where
> > PR52097 ICEs) by a trivial implementation.
> 
> How does it short-circuit the decision made by get_best_mode exactly?  By 
> making get_bit_range return non-zero in more cases?

It will make get_bit_range return non-zero in all cases (well, in
all cases where there is a COMPONENT_REF with a FIELD_DECL that
was part of a RECORD_TYPE at the time of finish_record_layout).
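
(A worked example of my own, not from the patch: for
struct S { unsigned a : 3; unsigned b : 5; int x; } the representative
of a and b covers bits [0, 7] with DECL_SIZE 8, so for a store to b at
bit position 3 the new get_bit_range computes bitoffset = 3 and returns
*bitstart = 0, *bitend = 7, and expansion is then constrained to that
byte.)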

> > There is PR52134 which will make this patch cause 1 gnat regression.
> 
> This looks rather benign to me.

Yeah, it should be easy to fix - the question is whether we can
really rely on TYPE_SIZE_UNIT (DECL_CONTEXT (field)) - DECL_FIELD_OFFSET 
(field) to fold to a constant if field is the first field of a bitfield
group.  We could also easily avoid the ICE by not computing a
DECL_BIT_FIELD_REPRESENTATIVE for such a field in some way.
In the end how we divide bitfield groups will probably get some
control via a langhook.

> > 	* gimplify.c (gimplify_expr): Translate bitfield accesses
> > 	to BIT_FIELD_REFs of the representative.
> 
> This part isn't in the patch.

Oops.  And it should not be (I did that for experimentation), ChangeLog
piece dropped.

> > + /* Return a new underlying object for a bitfield started with FIELD.  */
> > +
> > + static tree
> > + start_bitfield_representative (tree field)
> > + {
> > +   tree repr = make_node (FIELD_DECL);
> > +   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
> > +   /* Force the representative to begin at an BITS_PER_UNIT aligned
> 
> ...at a BITS_PER_UNIT aligned...

Fixed.

> > +      boundary - C++ may use tail-padding of a base object to
> > +      continue packing bits so the bitfield region does not start
> > +      at bit zero (see g++.dg/abi/bitfield5.C for example).
> > +      Unallocated bits may happen for other reasons as well,
> > +      for example Ada which allows explicit bit-granular structure layout.
> >  */ +   DECL_FIELD_BIT_OFFSET (repr)
> > +     = size_binop (BIT_AND_EXPR,
> > + 		  DECL_FIELD_BIT_OFFSET (field),
> > + 		  bitsize_int (~(BITS_PER_UNIT - 1)));
> > +   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
> > +   DECL_SIZE (repr) = DECL_SIZE (field);
> > +   DECL_PACKED (repr) = DECL_PACKED (field);
> > +   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
> > +   return repr;
> 
> Any particular reason to set DECL_SIZE but not DECL_SIZE_UNIT here?  They are 
> generally set together.

No reason, fixed (I just set the fields I will use while updating
'repr'; the other fields are set once the field has its final size).

> > +
> > + /* Finish up a bitfield group that was started by creating the underlying
> > +    object REPR with the last fied in the bitfield group FIELD.  */
> 
> ...the last field...

Fixed.

> > +   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
> > +   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> > +
> > +   /* Find the smallest nice mode to use.
> > +      ???  Possibly use get_best_mode with appropriate arguments instead
> > +      (which would eventually require splitting representatives here).  */
> > +   for (modesize = bitsize; modesize <= maxbitsize; modesize +=
> > BITS_PER_UNIT) +     {
> > +       mode = mode_for_size (modesize, MODE_INT, 1);
> > +       if (mode != BLKmode)
> > + 	break;
> > +     }
> 
> The double loop looks superfluous.  Why not re-using the implementation of 
> smallest_mode_for_size?

Ah, indeed.  Matching the current implementation would be

>   for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
>        mode = GET_MODE_WIDER_MODE (mode))
>     if (GET_MODE_PRECISION (mode) >= bitsize)
>       break;
> 
>   if (mode != VOIDmode
        && (GET_MODE_PRECISION (mode) > maxbitsize
            || GET_MODE_PRECISION (mode) > MAX_FIXED_MODE_SIZE))
>     mode = VOIDmode;
> 
>   if (mode == VOIDmode)
>     ...

Btw, I _think_ I want GET_MODE_BITSIZE here - we cannot allow
GET_MODE_BITSIZE > GET_MODE_PRECISION as that would possibly
access memory that is not allowed.  Thus, which GET_MODE_* identifies
the access size used for a MEM of that mode?

Anyway, fixed as you suggested, with the MAX_FIXED_MODE_SIZE check.

I'll split out the tree-sra.c piece, re-test and re-post.

Thanks,
Richard.


* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-12  9:38   ` Richard Guenther
@ 2012-03-12 10:21     ` Eric Botcazou
  2012-03-13 14:19       ` Richard Guenther
From: Eric Botcazou @ 2012-03-12 10:21 UTC
  To: Richard Guenther; +Cc: gcc-patches

> Btw, I _think_ I want GET_MODE_BITSIZE here - we cannot allow
> GET_MODE_BITSIZE > GET_MODE_PRECISION as that would possibly
> access memory that is not allowed.  Thus, what GET_MODE_* would
> identify the access size used for a MEM of that mode?

I agree that GET_MODE_BITSIZE makes more sense than GET_MODE_PRECISION here.
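
(To illustrate with made-up numbers: a hypothetical mode with
GET_MODE_PRECISION of 24 bits but GET_MODE_BITSIZE of 32 bits would
pass a precision-based check against a 24-bit bitfield region, yet a
MEM access in that mode reads and writes four bytes and would touch the
byte following the region - exactly what the representative is meant to
prevent.)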

-- 
Eric Botcazou


* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-12 10:21     ` Eric Botcazou
@ 2012-03-13 14:19       ` Richard Guenther
  2012-03-15 14:43         ` Eric Botcazou
From: Richard Guenther @ 2012-03-13 14:19 UTC
  To: Eric Botcazou; +Cc: gcc-patches

On Mon, 12 Mar 2012, Eric Botcazou wrote:

> > Btw, I _think_ I want GET_MODE_BITSIZE here - we cannot allow
> > GET_MODE_BITSIZE > GET_MODE_PRECISION as that would possibly
> > access memory that is not allowed.  Thus, what GET_MODE_* would
> > identify the access size used for a MEM of that mode?
> 
> I agree that GET_MODE_BITSIZE makes more sense than GET_MODE_PRECISION here.

Ok, I applied a fix for PR52134 and am preparing a fix for PR52578.
It seems we might not be able to rely on

      tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
                                  DECL_FIELD_OFFSET (repr));
      gcc_assert (host_integerp (maxsize, 1));

but at least until we get a testcase that shows this I won't add
(unexercised) code that handles it.  Eventually we'd need to treat
tail-padding specially for some languages anyway, via a new langhook.

So, bootstrapped and tested on x86_64-unknown-linux-gnu for all
languages including Ada, Objective-C++ and Go.  PR52578 remains; a fix
is currently being tested.

I'll apply this patch tomorrow unless there are any further comments.

Thanks,
Richard.

2012-03-13  Richard Guenther  <rguenther@suse.de>

	* tree.h (DECL_BIT_FIELD_REPRESENTATIVE): New define.
	* stor-layout.c (start_bitfield_representative): New function.
	(finish_bitfield_representative): Likewise.
	(finish_bitfield_layout): Likewise.
	(finish_record_layout): Call finish_bitfield_layout.
	* tree.c (free_lang_data_in_decl): Only free DECL_QUALIFIER
	for QUAL_UNION_TYPE fields.
	* tree-streamer-in.c (lto_input_ts_field_decl_tree_pointers):
	Stream DECL_BIT_FIELD_REPRESENTATIVE.
	* tree-streamer-out.c (write_ts_field_decl_tree_pointers): Likewise.

	PR middle-end/52080
	PR middle-end/52097
	PR middle-end/48124
	* expr.c (get_bit_range): Unconditionally extract bitrange
	from DECL_BIT_FIELD_REPRESENTATIVE.
	(expand_assignment): Adjust call to get_bit_range.

	* gcc.dg/torture/pr48124-1.c: New testcase.
	* gcc.dg/torture/pr48124-2.c: Likewise.
	* gcc.dg/torture/pr48124-3.c: Likewise.
	* gcc.dg/torture/pr48124-4.c: Likewise.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c.orig	2012-03-13 10:30:49.000000000 +0100
--- gcc/stor-layout.c	2012-03-13 11:48:26.000000000 +0100
*************** finalize_type_size (tree type)
*** 1722,1727 ****
--- 1722,1911 ----
      }
  }
  
+ /* Return a new underlying object for a bitfield started with FIELD.  */
+ 
+ static tree
+ start_bitfield_representative (tree field)
+ {
+   tree repr = make_node (FIELD_DECL);
+   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
+   /* Force the representative to begin at a BITS_PER_UNIT aligned
+      boundary - C++ may use tail-padding of a base object to
+      continue packing bits so the bitfield region does not start
+      at bit zero (see g++.dg/abi/bitfield5.C for example).
+      Unallocated bits may happen for other reasons as well,
+      for example Ada which allows explicit bit-granular structure layout.  */
+   DECL_FIELD_BIT_OFFSET (repr)
+     = size_binop (BIT_AND_EXPR,
+ 		  DECL_FIELD_BIT_OFFSET (field),
+ 		  bitsize_int (~(BITS_PER_UNIT - 1)));
+   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
+   DECL_SIZE (repr) = DECL_SIZE (field);
+   DECL_SIZE_UNIT (repr) = DECL_SIZE_UNIT (field);
+   DECL_PACKED (repr) = DECL_PACKED (field);
+   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
+   return repr;
+ }
+ 
+ /* Finish up a bitfield group that was started by creating the underlying
+    object REPR with the last field in the bitfield group FIELD.  */
+ 
+ static void
+ finish_bitfield_representative (tree repr, tree field)
+ {
+   unsigned HOST_WIDE_INT bitsize, maxbitsize;
+   enum machine_mode mode;
+   tree nextf, size;
+ 
+   size = size_diffop (DECL_FIELD_OFFSET (field),
+ 		      DECL_FIELD_OFFSET (repr));
+   gcc_assert (host_integerp (size, 1));
+   bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
+ 	     + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
+ 	     - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
+ 	     + tree_low_cst (DECL_SIZE (field), 1));
+ 
+   /* Now nothing tells us how to pad out bitsize ...  */
+   nextf = DECL_CHAIN (field);
+   while (nextf && TREE_CODE (nextf) != FIELD_DECL)
+     nextf = DECL_CHAIN (nextf);
+   if (nextf)
+     {
+       tree maxsize;
+       /* If there was an error, the field may not be laid out
+          correctly.  Don't bother to do anything.  */
+       if (TREE_TYPE (nextf) == error_mark_node)
+ 	return;
+       maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
+ 			     DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+   else
+     {
+       /* ???  If you consider that tail-padding of this struct might be
+          re-used when deriving from it we cannot really do the following
+ 	 and thus need to set maxsize to bitsize?  */
+       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
+ 				  DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+ 
+   /* Only if we don't artificially break up the representative in
+      the middle of a large bitfield with different possibly
+      overlapping representatives.  And all representatives start
+      at byte offset.  */
+   gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
+ 
+   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
+   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
+ 
+   /* Find the smallest nice mode to use.  */
+   for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
+        mode = GET_MODE_WIDER_MODE (mode))
+     if (GET_MODE_BITSIZE (mode) >= bitsize)
+       break;
+   if (mode != VOIDmode
+       && (GET_MODE_BITSIZE (mode) > maxbitsize
+ 	  || GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE))
+     mode = VOIDmode;
+ 
+   if (mode == VOIDmode)
+     {
+       /* We really want a BLKmode representative only as a last resort,
+          considering the member b in
+ 	   struct { int a : 7; int b : 17; int c; } __attribute__((packed));
+ 	 Otherwise we simply want to split the representative up
+ 	 allowing for overlaps within the bitfield region as required for
+ 	   struct { int a : 7; int b : 7;
+ 		    int c : 10; int d; } __attribute__((packed));
+ 	 [0, 15] HImode for a and b, [8, 23] HImode for c.  */
+       DECL_SIZE (repr) = bitsize_int (bitsize);
+       DECL_SIZE_UNIT (repr) = size_int (bitsize / BITS_PER_UNIT);
+       DECL_MODE (repr) = BLKmode;
+       TREE_TYPE (repr) = build_array_type_nelts (unsigned_char_type_node,
+ 						 bitsize / BITS_PER_UNIT);
+     }
+   else
+     {
+       unsigned HOST_WIDE_INT modesize = GET_MODE_BITSIZE (mode);
+       DECL_SIZE (repr) = bitsize_int (modesize);
+       DECL_SIZE_UNIT (repr) = size_int (modesize / BITS_PER_UNIT);
+       DECL_MODE (repr) = mode;
+       TREE_TYPE (repr) = lang_hooks.types.type_for_mode (mode, 1);
+     }
+ 
+   /* Remember whether the bitfield group is at the end of the
+      structure or not.  */
+   DECL_CHAIN (repr) = nextf;
+ }
+ 
+ /* Compute and set FIELD_DECLs for the underlying objects we should
+    use for bitfield access for the structure laid out with RLI.  */
+ 
+ static void
+ finish_bitfield_layout (record_layout_info rli)
+ {
+   tree field, prev;
+   tree repr = NULL_TREE;
+ 
+   /* Unions would be special, for the ease of type-punning optimizations
+      we could use the underlying type as hint for the representative
+      if the bitfield would fit and the representative would not exceed
+      the union in size.  */
+   if (TREE_CODE (rli->t) != RECORD_TYPE)
+     return;
+ 
+   for (prev = NULL_TREE, field = TYPE_FIELDS (rli->t);
+        field; field = DECL_CHAIN (field))
+     {
+       if (TREE_CODE (field) != FIELD_DECL)
+ 	continue;
+ 
+       /* In the C++ memory model, consecutive bit fields in a structure are
+ 	 considered one memory location and updating a memory location
+ 	 may not store into adjacent memory locations.  */
+       if (!repr
+ 	  && DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Start new representative.  */
+ 	  repr = start_bitfield_representative (field);
+ 	}
+       else if (repr
+ 	       && ! DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Finish off new representative.  */
+ 	  finish_bitfield_representative (repr, prev);
+ 	  repr = NULL_TREE;
+ 	}
+       else if (DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Zero-size bitfields finish off a representative and
+ 	     do not have a representative themselves.  This is
+ 	     required by the C++ memory model.  */
+ 	  if (integer_zerop (DECL_SIZE (field)))
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = NULL_TREE;
+ 	    }
+ 	}
+       else
+ 	continue;
+ 
+       if (repr)
+ 	DECL_BIT_FIELD_REPRESENTATIVE (field) = repr;
+ 
+       prev = field;
+     }
+ 
+   if (repr)
+     finish_bitfield_representative (repr, prev);
+ }
+ 
  /* Do all of the work required to layout the type indicated by RLI,
     once the fields have been laid out.  This function will call `free'
     for RLI, unless FREE_P is false.  Passing a value other than false
*************** finish_record_layout (record_layout_info
*** 1742,1747 ****
--- 1926,1934 ----
    /* Perform any last tweaks to the TYPE_SIZE, etc.  */
    finalize_type_size (rli->t);
  
+   /* Compute bitfield representatives.  */
+   finish_bitfield_layout (rli);
+ 
    /* Propagate TYPE_PACKED to variants.  With C++ templates,
       handle_packed_attribute is too early to do this.  */
    for (variant = TYPE_NEXT_VARIANT (rli->t); variant;
Index: gcc/tree.h
===================================================================
*** gcc/tree.h.orig	2012-03-13 10:30:49.000000000 +0100
--- gcc/tree.h	2012-03-13 11:18:49.000000000 +0100
*************** struct GTY(()) tree_decl_with_rtl {
*** 3024,3029 ****
--- 3024,3034 ----
  #define DECL_BIT_FIELD_TYPE(NODE) \
    (FIELD_DECL_CHECK (NODE)->field_decl.bit_field_type)
  
+ /* In a FIELD_DECL of a RECORD_TYPE, this is a pointer to the storage
+    representative FIELD_DECL.  */
+ #define DECL_BIT_FIELD_REPRESENTATIVE(NODE) \
+   (FIELD_DECL_CHECK (NODE)->field_decl.qualifier)
+ 
  /* For a FIELD_DECL in a QUAL_UNION_TYPE, records the expression, which
     if nonzero, indicates that the field occupies the type.  */
  #define DECL_QUALIFIER(NODE) (FIELD_DECL_CHECK (NODE)->field_decl.qualifier)
Index: gcc/expr.c
===================================================================
*** gcc/expr.c.orig	2012-03-13 10:30:49.000000000 +0100
--- gcc/expr.c	2012-03-13 11:18:49.000000000 +0100
*************** optimize_bitfield_assignment_op (unsigne
*** 4439,4551 ****
  /* In the C++ memory model, consecutive bit fields in a structure are
     considered one memory location.
  
!    Given a COMPONENT_REF, this function returns the bit range of
!    consecutive bits in which this COMPONENT_REF belongs in.  The
!    values are returned in *BITSTART and *BITEND.  If either the C++
!    memory model is not activated, or this memory access is not thread
!    visible, 0 is returned in *BITSTART and *BITEND.
! 
!    EXP is the COMPONENT_REF.
!    INNERDECL is the actual object being referenced.
!    BITPOS is the position in bits where the bit starts within the structure.
!    BITSIZE is size in bits of the field being referenced in EXP.
! 
!    For example, while storing into FOO.A here...
! 
!       struct {
!         BIT 0:
!           unsigned int a : 4;
! 	  unsigned int b : 1;
! 	BIT 8:
! 	  unsigned char c;
! 	  unsigned int d : 6;
!       } foo;
! 
!    ...we are not allowed to store past <b>, so for the layout above, a
!    range of 0..7 (because no one cares if we store into the
!    padding).  */
  
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp, tree innerdecl,
! 	       HOST_WIDE_INT bitpos, HOST_WIDE_INT bitsize)
  {
!   tree field, record_type, fld;
!   bool found_field = false;
!   bool prev_field_is_bitfield;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
!   /* If other threads can't see this value, no need to restrict stores.  */
!   if (ALLOW_STORE_DATA_RACES
!       || ((TREE_CODE (innerdecl) == MEM_REF
! 	   || TREE_CODE (innerdecl) == TARGET_MEM_REF)
! 	  && !ptr_deref_may_alias_global_p (TREE_OPERAND (innerdecl, 0)))
!       || (DECL_P (innerdecl)
! 	  && ((TREE_CODE (innerdecl) == VAR_DECL
! 	       && DECL_THREAD_LOCAL_P (innerdecl))
! 	      || !TREE_STATIC (innerdecl))))
      {
        *bitstart = *bitend = 0;
        return;
      }
  
!   /* Bit field we're storing into.  */
!   field = TREE_OPERAND (exp, 1);
!   record_type = DECL_FIELD_CONTEXT (field);
! 
!   /* Count the contiguous bitfields for the memory location that
!      contains FIELD.  */
!   *bitstart = 0;
!   prev_field_is_bitfield = true;
!   for (fld = TYPE_FIELDS (record_type); fld; fld = DECL_CHAIN (fld))
!     {
!       tree t, offset;
!       enum machine_mode mode;
!       int unsignedp, volatilep;
! 
!       if (TREE_CODE (fld) != FIELD_DECL)
! 	continue;
! 
!       t = build3 (COMPONENT_REF, TREE_TYPE (exp),
! 		  unshare_expr (TREE_OPERAND (exp, 0)),
! 		  fld, NULL_TREE);
!       get_inner_reference (t, &bitsize, &bitpos, &offset,
! 			   &mode, &unsignedp, &volatilep, true);
! 
!       if (field == fld)
! 	found_field = true;
! 
!       if (DECL_BIT_FIELD_TYPE (fld) && bitsize > 0)
! 	{
! 	  if (prev_field_is_bitfield == false)
! 	    {
! 	      *bitstart = bitpos;
! 	      prev_field_is_bitfield = true;
! 	    }
! 	}
!       else
! 	{
! 	  prev_field_is_bitfield = false;
! 	  if (found_field)
! 	    break;
! 	}
!     }
!   gcc_assert (found_field);
  
!   if (fld)
!     {
!       /* We found the end of the bit field sequence.  Include the
! 	 padding up to the next field and be done.  */
!       *bitend = bitpos - 1;
!     }
!   else
!     {
!       /* If this is the last element in the structure, include the padding
! 	 at the end of structure.  */
!       *bitend = TREE_INT_CST_LOW (TYPE_SIZE (record_type)) - 1;
!     }
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
--- 4439,4481 ----
  /* In the C++ memory model, consecutive bit fields in a structure are
     considered one memory location.
  
!    Given a COMPONENT_REF EXP at bit position BITPOS, this function
!    returns the bit range of consecutive bits in which this COMPONENT_REF
!    belongs in.  The values are returned in *BITSTART and *BITEND.
!    If the access does not need to be restricted 0 is returned in
!    *BITSTART and *BITEND.  */
  
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp,
! 	       HOST_WIDE_INT bitpos)
  {
!   unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr, offset;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
!   field = TREE_OPERAND (exp, 1);
!   repr = DECL_BIT_FIELD_REPRESENTATIVE (field);
!   /* If we do not have a DECL_BIT_FIELD_REPRESENTATIVE there is no
!      need to limit the range we can access.  */
!   if (!repr)
      {
        *bitstart = *bitend = 0;
        return;
      }
  
!   /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  */
!   offset = size_diffop (DECL_FIELD_OFFSET (field),
! 			DECL_FIELD_OFFSET (repr));
!   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
! 	       + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
!   *bitstart = bitpos - bitoffset;
!   *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
*************** expand_assignment (tree to, tree from, b
*** 4674,4681 ****
  
        if (TREE_CODE (to) == COMPONENT_REF
  	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
! 	get_bit_range (&bitregion_start, &bitregion_end,
! 		       to, tem, bitpos, bitsize);
  
        /* If we are going to use store_bit_field and extract_bit_field,
  	 make sure to_rtx will be safe for multiple use.  */
--- 4604,4610 ----
  
        if (TREE_CODE (to) == COMPONENT_REF
  	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
! 	get_bit_range (&bitregion_start, &bitregion_end, to, bitpos);
  
        /* If we are going to use store_bit_field and extract_bit_field,
  	 make sure to_rtx will be safe for multiple use.  */
Index: gcc/testsuite/gcc.dg/torture/pr48124-1.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-1.c	2012-03-13 11:18:49.000000000 +0100
***************
*** 0 ****
--- 1,33 ----
+ /* { dg-do run } */
+ /* { dg-options "-fno-toplevel-reorder" } */
+ 
+ extern void abort (void);
+ 
+ struct S
+ {
+   signed a : 26;
+   signed b : 16;
+   signed c : 10;
+   volatile signed d : 14;
+ };
+ 
+ static struct S e = { 0, 0, 0, 1 };
+ static int f = 1;
+ 
+ void __attribute__((noinline))
+ foo (void)
+ {
+   e.d = 0;
+   f = 2;
+ }
+ 
+ int
+ main ()
+ {
+   if (e.a || e.b || e.c || e.d != 1 || f != 1)
+     abort ();
+   foo ();
+   if (e.a || e.b || e.c || e.d || f != 2)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-2.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-2.c	2012-03-13 11:18:49.000000000 +0100
***************
*** 0 ****
--- 1,27 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ 
+ static volatile struct S0 {
+     short f3[9];
+     unsigned f8 : 15;
+ } s = {1};
+ static unsigned short sh = 0x1234;
+ 
+ struct S0 a, b;
+ int vi = 0;
+ 
+ void func_4()
+ {
+   s.f8 |= 1;
+   sh = 15;
+   if (vi) a = b;
+ }
+ 
+ int main()
+ {
+   func_4();
+   if (sh != 15)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-3.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-3.c	2012-03-13 11:18:49.000000000 +0100
***************
*** 0 ****
--- 1,32 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1
+ {
+   int f0;
+   int:1;
+   int f3;
+   int:1;
+   int:0;
+   int f6:1;
+ };
+ int g_13 = 1;
+ volatile struct S1 g_118 = {
+     1
+ };
+ 
+ void __attribute__((noinline))
+ func_46 ()
+ {
+   for (g_13 = 0; g_13 >= 0; g_13 -= 1)
+     g_118.f6 = 0;
+ }
+ 
+ int
+ main ()
+ {
+   func_46 ();
+   if (g_13 != -1)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-4.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-4.c	2012-03-13 11:18:49.000000000 +0100
***************
*** 0 ****
--- 1,28 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1 {
+     unsigned f0, f1;
+     unsigned short f2, f3;
+     unsigned f4 : 16;
+     unsigned f5, f6;
+     volatile unsigned f7 : 28;
+ };
+ static struct S1 g_76;
+ static struct S1 g_245 = {0,0,0,0,0,0,0,1};
+ static signed char g_323 = 0x80;
+ static void func_1(void)
+ {
+   g_245.f7 &= 1;
+   for (g_323 = 0; g_323 <= -1; g_323 -= 2) {
+       g_76 = g_76;
+       g_76.f4 ^= 11;
+   }
+ }
+ int main()
+ {
+   func_1();
+   if (g_323 != 0 || g_245.f7 != 1)
+     abort ();
+   return 0;
+ }
Index: gcc/tree-streamer-in.c
===================================================================
*** gcc/tree-streamer-in.c.orig	2012-03-13 10:30:49.000000000 +0100
--- gcc/tree-streamer-in.c	2012-03-13 11:18:49.000000000 +0100
*************** lto_input_ts_field_decl_tree_pointers (s
*** 640,646 ****
  {
    DECL_FIELD_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_BIT_FIELD_TYPE (expr) = stream_read_tree (ib, data_in);
!   /* Do not stream DECL_QUALIFIER, it is useless after gimplification.  */
    DECL_FIELD_BIT_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_FCONTEXT (expr) = stream_read_tree (ib, data_in);
  }
--- 640,646 ----
  {
    DECL_FIELD_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_BIT_FIELD_TYPE (expr) = stream_read_tree (ib, data_in);
!   DECL_BIT_FIELD_REPRESENTATIVE (expr) = stream_read_tree (ib, data_in);
    DECL_FIELD_BIT_OFFSET (expr) = stream_read_tree (ib, data_in);
    DECL_FCONTEXT (expr) = stream_read_tree (ib, data_in);
  }
Index: gcc/tree-streamer-out.c
===================================================================
*** gcc/tree-streamer-out.c.orig	2012-03-13 10:30:49.000000000 +0100
--- gcc/tree-streamer-out.c	2012-03-13 11:18:49.000000000 +0100
*************** write_ts_field_decl_tree_pointers (struc
*** 552,558 ****
  {
    stream_write_tree (ob, DECL_FIELD_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_BIT_FIELD_TYPE (expr), ref_p);
!   /* Do not stream DECL_QUALIFIER, it is useless after gimplification.  */
    stream_write_tree (ob, DECL_FIELD_BIT_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_FCONTEXT (expr), ref_p);
  }
--- 552,558 ----
  {
    stream_write_tree (ob, DECL_FIELD_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_BIT_FIELD_TYPE (expr), ref_p);
!   stream_write_tree (ob, DECL_BIT_FIELD_REPRESENTATIVE (expr), ref_p);
    stream_write_tree (ob, DECL_FIELD_BIT_OFFSET (expr), ref_p);
    stream_write_tree (ob, DECL_FCONTEXT (expr), ref_p);
  }
Index: gcc/tree.c
===================================================================
*** gcc/tree.c.orig	2012-03-12 16:55:39.000000000 +0100
--- gcc/tree.c	2012-03-13 11:57:11.000000000 +0100
*************** free_lang_data_in_decl (tree decl)
*** 4633,4639 ****
    if (TREE_CODE (decl) == FIELD_DECL)
      {
        free_lang_data_in_one_sizepos (&DECL_FIELD_OFFSET (decl));
!       DECL_QUALIFIER (decl) = NULL_TREE;
      }
  
   if (TREE_CODE (decl) == FUNCTION_DECL)
--- 4633,4640 ----
    if (TREE_CODE (decl) == FIELD_DECL)
      {
        free_lang_data_in_one_sizepos (&DECL_FIELD_OFFSET (decl));
!       if (TREE_CODE (DECL_CONTEXT (decl)) == QUAL_UNION_TYPE)
! 	DECL_QUALIFIER (decl) = NULL_TREE;
      }
  
   if (TREE_CODE (decl) == FUNCTION_DECL)

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-13 14:19       ` Richard Guenther
@ 2012-03-15 14:43         ` Eric Botcazou
  2012-03-15 14:58           ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-15 14:43 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 960 bytes --]

> Ok, I applied a fix for PR52134 and am preparing a fix for PR52578.
> It seems we might not be able to rely on
>
>       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
>                                   DECL_FIELD_OFFSET (repr));
>       gcc_assert (host_integerp (maxsize, 1));
>
> but at least until we get a testcase that shows so I won't add
> (unexercised) code that handles it.  Eventually we'd need to treat
> tail-padding specially for some languages anyway, via a new langhook.

This caused 3 classes of problems in Ada:

  1. failure of the above assertion (pack7.ads)
  2. ICE in tree_low_cst (pack16.adb, pack16_pkg.ads)
  3. miscompilation (to be dealt with later).

1. and 2. appear to come from variable-sized fields (and 3. from record types 
with variant part).  Testcases attached, they can be installed as:

  gnat.dg/pack16.adb
  gnat.dg/pack16_pkg.ads
  gnat.dg/specs/pack7.ads

in the testsuite.

-- 
Eric Botcazou

[-- Attachment #2: pack16.adb --]
[-- Type: text/x-adasrc, Size: 471 bytes --]

-- { dg-do compile }
-- { dg-options "-gnatws" }

with Pack16_Pkg; use Pack16_Pkg;

procedure Pack16 is

   type Sample_Table_T is array (1 .. N) of Integer;

   type Clock_T is record
      N_Ticks  : Integer := 0;
   end record;

   type Sampling_Descriptor_T is record
      Values : Sample_Table_T;
      Valid  : Boolean;
      Tstamp : Clock_T;
   end record;

   pragma Pack (Sampling_Descriptor_T);

   Sampling_Data : Sampling_Descriptor_T;

begin
   null;
end;

[-- Attachment #3: pack16_pkg.ads --]
[-- Type: text/x-adasrc, Size: 62 bytes --]

package Pack16_Pkg is

   N : Natural := 16;

end Pack16_Pkg;

[-- Attachment #4: pack7.ads --]
[-- Type: text/x-adasrc, Size: 212 bytes --]

-- { dg-do compile }

package Pack7 is

   type R (D : Natural) is record
      S : String (1 .. D);
      N : Natural;
      B : Boolean;
   end record;
   for R'Alignment use 4;
   pragma Pack (R);

end Pack7;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-15 14:43         ` Eric Botcazou
@ 2012-03-15 14:58           ` Richard Guenther
  2012-03-15 15:30             ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-15 14:58 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Thu, 15 Mar 2012, Eric Botcazou wrote:

> > Ok, I applied a fix for PR52134 and am preparing a fix for PR52578.
> > It seems we might not be able to rely on
> >
> >       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
> >                                   DECL_FIELD_OFFSET (repr));
> >       gcc_assert (host_integerp (maxsize, 1));
> >
> > but at least until we get a testcase that shows so I won't add
> > (unexercised) code that handles it.  Eventually we'd need to treat
> > tail-padding specially for some languages anyway, via a new langhook.
> 
> This caused 3 classes of problems in Ada:
> 
>   1. failure of the above assertion (pack7.ads)
>   2. ICE in tree_low_cst (pack16.adb, pack16_pkg.ads)
>   3. miscompilation (to be dealt with later).
> 
> 1. and 2. appear to come from variable-sized fields (and 3. from record types 
> with variant part).  Testcases attached, they can be installed as:
> 
>   gnat.dg/pack16.adb
>   gnat.dg/pack16_pkg.ads
>   gnat.dg/specs/pack7.ads
> 
> in the testsuite.

I'll address these and add the testcases.

Thanks,
Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-15 14:58           ` Richard Guenther
@ 2012-03-15 15:30             ` Richard Guenther
  2012-03-15 16:19               ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-15 15:30 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Thu, 15 Mar 2012, Richard Guenther wrote:

> On Thu, 15 Mar 2012, Eric Botcazou wrote:
> 
> > > Ok, I applied a fix for PR52134 and am preparing a fix for PR52578.
> > > It seems we might not be able to rely on
> > >
> > >       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
> > >                                   DECL_FIELD_OFFSET (repr));
> > >       gcc_assert (host_integerp (maxsize, 1));
> > >
> > > but at least until we get a testcase that shows so I won't add
> > > (unexercised) code that handles it.  Eventually we'd need to treat
> > > tail-padding specially for some languages anyway, via a new langhook.
> > 
> > This caused 3 classes of problems in Ada:
> > 
> >   1. failure of the above assertion (pack7.ads)
> >   2. ICE in tree_low_cst (pack16.adb, pack16_pkg.ads)
> >   3. miscompilation (to be dealt with later).
> > 
> > 1. and 2. appear to come from variable-sized fields (and 3. from record types 
> > with variant part).  Testcases attached, they can be installed as:
> > 
> >   gnat.dg/pack16.adb
> >   gnat.dg/pack16_pkg.ads
> >   gnat.dg/specs/pack7.ads
> > 
> > in the testsuite.
> 
> I'll address these and add the testcases.

1. is easy, see patch below.  2. is much harder - we need to
compute the bit-offset relative to the bitfield group start,
thus in get_bit_range we do

  /* Compute the adjustment to bitpos from the offset of the field
     relative to the representative.  */ 
  offset = size_diffop (DECL_FIELD_OFFSET (field),
                        DECL_FIELD_OFFSET (repr));
  bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
               + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
               - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));

and we cannot generally assume offset is zero (well, maybe we could
arrange that though, at least we could assert that for all fields
in a bitfield group DECL_FIELD_OFFSET is the same?).  Now, first
off, we lack gimplification of the DECL_BIT_FIELD_REPRESENTATIVE's
DECL_FIELD_OFFSET (not that we really need it - nothing could
later introduce a COMPONENT_REF with that reliably as nothing
holds on to the generated expressions).  But then even for
originally equal DECL_FIELD_OFFSETs we gimplify different
stmts and thus have a different decl in DECL_FIELD_OFFSET so
the folding above does not return a constant either (this makes
me think that we can avoid gimplifying most of the sizepos - we
only need to gimplify those that actually appear in COMPONENT_REFs).
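
For contrast, here is a plain C case where the above folds nicely
(illustration only - 'struct toy' and its fields are made up, and I'm
assuming the usual x86_64 layout):

  struct toy
  {
    int i;
    unsigned a : 3;   /* group starts at the byte right after 'i' */
    unsigned b : 20;  /* 3 bits into that group */
  };

DECL_FIELD_OFFSET of 'a', 'b' and their representative are all the same
constant, so for a store to 'b' the adjustment above folds to 3 bits and
the protected region is at most the four bytes following 'i' - 'i'
itself is never part of it.  The Ada cases break exactly because the
offsets are non-constant trees that size_diffop cannot fold.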

Any suggestion?  Apart from trying to make sure that offset will
be zero by construction?  Or by simply not handling bitfields
properly that start at a variable offset?

The finish_bitfield_representative hunk implements the fix for 1.,
the rest the proposed zero-by-construction solution for 2.

Thanks,
Richard.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c	(revision 185426)
--- gcc/stor-layout.c	(working copy)
*************** finish_bitfield_representative (tree rep
*** 1765,1770 ****
--- 1765,1773 ----
  	     - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
  	     + tree_low_cst (DECL_SIZE (field), 1));
  
+   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
+   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
+ 
    /* Now nothing tells us how to pad out bitsize ...  */
    nextf = DECL_CHAIN (field);
    while (nextf && TREE_CODE (nextf) != FIELD_DECL)
*************** finish_bitfield_representative (tree rep
*** 1787,1798 ****
      {
        /* ???  If you consider that tail-padding of this struct might be
           re-used when deriving from it we cannot really do the following
! 	 and thus need to set maxsize to bitsize?  */
        tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
  				  DECL_FIELD_OFFSET (repr));
!       gcc_assert (host_integerp (maxsize, 1));
!       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
      }
  
    /* Only if we don't artificially break up the representative in
--- 1790,1805 ----
      {
        /* ???  If you consider that tail-padding of this struct might be
           re-used when deriving from it we cannot really do the following
! 	 and thus need to set maxsize to bitsize?  Also we cannot
! 	 generally rely on maxsize to fold to an integer constant, so
! 	 use bitsize as fallback for this case.  */
        tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
  				  DECL_FIELD_OFFSET (repr));
!       if (host_integerp (maxsize, 1))
! 	maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 		      - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
!       else
! 	maxbitsize = bitsize;
      }
  
    /* Only if we don't artificially break up the representative in
*************** finish_bitfield_representative (tree rep
*** 1801,1809 ****
       at byte offset.  */
    gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
  
-   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
-   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
- 
    /* Find the smallest nice mode to use.  */
    for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
         mode = GET_MODE_WIDER_MODE (mode))
--- 1808,1813 ----
*************** finish_bitfield_layout (record_layout_in
*** 1884,1889 ****
--- 1888,1896 ----
  	}
        else if (DECL_BIT_FIELD_TYPE (field))
  	{
+ 	  gcc_assert (repr != NULL_TREE
+ 		      && operand_equal_p (DECL_FIELD_OFFSET (repr),
+ 					  DECL_FIELD_OFFSET (field), 0));
  	  /* Zero-size bitfields finish off a representative and
  	     do not have a representative themselves.  This is
  	     required by the C++ memory model.  */
Index: gcc/expr.c
===================================================================
*** gcc/expr.c	(revision 185426)
--- gcc/expr.c	(working copy)
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4452,4458 ****
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr, offset;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
--- 4452,4458 ----
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4467,4477 ****
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  */
!   offset = size_diffop (DECL_FIELD_OFFSET (field),
! 			DECL_FIELD_OFFSET (repr));
!   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
! 	       + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
  	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;
--- 4467,4475 ----
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  DECL_FIELD_OFFSET of field and
!      repr are the same by construction.  */
!   bitoffset = (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
  	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-15 15:30             ` Richard Guenther
@ 2012-03-15 16:19               ` Eric Botcazou
  2012-03-15 17:03                 ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-15 16:19 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> 1. is easy, see patch below.  2. is much harder - we need to
> compute the bit-offset relative to the bitfield group start,
> thus in get_bit_range we do
>
>   /* Compute the adjustment to bitpos from the offset of the field
>      relative to the representative.  */
>   offset = size_diffop (DECL_FIELD_OFFSET (field),
>                         DECL_FIELD_OFFSET (repr));
>   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
>                + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
>                - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
>
> and we cannot generally assume offset is zero (well, maybe we could
> arrange that though, at least we could assert that for all fields
> in a bitfield group DECL_FIELD_OFFSET is the same?).

I'm skeptical.

> Any suggestion?  Apart from trying to make sure that offset will
> be zero by construction?  Or by simply not handling bitfields
> properly that start at a variable offset?

Computing the offset in stor-layout.c and storing it in DECL_INITIAL?

> The finish_bitfield_representative hunk implements the fix for 1.,
> the rest the proposed zero-by-construction solution for 2.

Thanks!

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-15 16:19               ` Eric Botcazou
@ 2012-03-15 17:03                 ` Eric Botcazou
  2012-03-16  9:42                   ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-15 17:03 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> Computing the offset in stor-layout.c and storing it in DECL_INITIAL?

Ugh.  I just realized that the DECL_BIT_FIELD_REPRESENTATIVE is built during 
layout... but is overloaded with DECL_QUALIFIER.  That's probably the source 
of the miscompilation I talked about earlier.

Can we delay it until after gimplification ?  Then we could use DECL_INITIAL.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-15 17:03                 ` Eric Botcazou
@ 2012-03-16  9:42                   ` Richard Guenther
  2012-03-19 10:45                     ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-16  9:42 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Thu, 15 Mar 2012, Eric Botcazou wrote:

> > Computing the offset in stor-layout.c and storing it in DECL_INITIAL?
> 
> Ugh.  I just realized that the DECL_BIT_FIELD_REPRESENTATIVE is built during 
> layout... but is overloaded with DECL_QUALIFIER.  That's probably the source 
> of the miscompilation I talked about earlier.

But it's only ever computed for RECORD_TYPEs where DECL_QUALIFIER is
unused.

> Can we delay it until after gimplification ?  Then we could use DECL_INITIAL.

No, I don't think so.  We can store the bit-offset relative to the
start of the group in tree_base.address_space (ugh), if 8 bits
are enough for that (ugh).

For now giving up seems to be easiest (just give up when
DECL_FIELD_OFFSET is not equal for all of the bitfield members).
That will at most get you the miscompiles for the PRs back, for
languages with funny structure layout.

Well.  I'll think about this some more and in the meantime install
the fix for the easy problem.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-16  9:42                   ` Richard Guenther
@ 2012-03-19 10:45                     ` Eric Botcazou
  2012-03-19 13:14                       ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-19 10:45 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 929 bytes --]

> But it's only ever computed for RECORD_TYPEs where DECL_QUALIFIER is
> unused.

OK, that could work indeed.

> For now giving up seems to be easiest (just give up when
> DECL_FIELD_OFFSET is not equal for all of the bitfield members).
> That will at most get you the miscompiles for the PRs back, for
> languages with funny structure layout.

I have another variant of the DECL_FIELD_OFFSET problem:

FAIL: gnat.dg/specs/pack8.ads (test for excess errors)
Excess errors:
+===========================GNAT BUG DETECTED==============================+
| 4.8.0 20120314 (experimental) [trunk revision 185395] (i586-suse-linux) GCC 
error:|
| in finish_bitfield_representative, at stor-layout.c:1762                 |
| Error detected at pack8.ads:17:4                   

Testcase attached:

  gnat.dg/specs/pack8.ads
  gnat.dg/specs/pack8_pkg.ads

I agree that giving up (for now) is a sensible option.  Thanks.

-- 
Eric Botcazou

[-- Attachment #2: pack8.ads --]
[-- Type: text/x-adasrc, Size: 311 bytes --]

with Pack8_Pkg;

package Pack8 is

   subtype Index_Type is Integer range 1 .. Pack8_Pkg.N;

   subtype Str is String( Index_Type);

   subtype Str2 is String (1 .. 11);

   type Rec is record
      S1 : Str;
      S2 : Str;
      B  : Boolean;
      S3 : Str2;
   end record;
   pragma Pack (Rec);

end Pack8;

[-- Attachment #3: pack8_pkg.ads --]
[-- Type: text/x-adasrc, Size: 59 bytes --]

package Pack8_Pkg is

   N : Natural := 1;

end Pack8_Pkg;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-19 10:45                     ` Eric Botcazou
@ 2012-03-19 13:14                       ` Richard Guenther
  2012-03-20  9:30                         ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-19 13:14 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Mon, 19 Mar 2012, Eric Botcazou wrote:

> > But it's only ever computed for RECORD_TYPEs where DECL_QUALIFIER is
> > unused.
> 
> OK, that could work indeed.
> 
> > For now giving up seems to be easiest (just give up when
> > DECL_FIELD_OFFSET is not equal for all of the bitfield members).
> > That will at most get you the miscompiles for the PRs back, for
> > languages with funny structure layout.
> 
> I have another variant of the DECL_FIELD_OFFSET problem:
> 
> FAIL: gnat.dg/specs/pack8.ads (test for excess errors)
> Excess errors:
> +===========================GNAT BUG DETECTED==============================+
> | 4.8.0 20120314 (experimental) [trunk revision 185395] (i586-suse-linux) GCC 
> error:|
> | in finish_bitfield_representative, at stor-layout.c:1762                 |
> | Error detected at pack8.ads:17:4                   
> 
> Testcase attached:
> 
>   gnat.dg/specs/pack8.ads
>   gnat.dg/specs/pack8_pkg.ads

Thanks.  That one indeed has different DECL_FIELD_OFFSET,

((sizetype) MAX_EXPR <(integer) pack8__R1s, 0> + (sizetype) MAX_EXPR 
<(integer) pack8__R1s, 0>) + 1

vs.

(sizetype) MAX_EXPR <(integer) pack8__R1s, 0> + (sizetype) MAX_EXPR 
<(integer) pack8__R1s, 0>

we're not putting the 1 byte offset into DECL_FIELD_BIT_OFFSET
because DECL_OFFSET_ALIGN is 8 in this case.  Eventually we should
be able to relax how many bits we push into DECL_FIELD_BIT_OFFSET.
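
(Side note, my reading of the offset split rather than anything the
patch changes: a field's bit position decomposes as

  DECL_FIELD_OFFSET (f) * BITS_PER_UNIT + DECL_FIELD_BIT_OFFSET (f)

with DECL_FIELD_BIT_OFFSET kept below DECL_OFFSET_ALIGN.  So with
DECL_OFFSET_ALIGN == 8 the whole-byte displacement has to end up as the
"+ 1" in the variable DECL_FIELD_OFFSET tree instead of in the constant
DECL_FIELD_BIT_OFFSET, which is why the two offset trees above are not
operand_equal_p and their difference does not fold to a constant.)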

> I agree that giving up (for now) is a sensible option.  Thanks.

Done with the patch below.  We're actually not going to generate
possibly wrong code again, only sub-optimal code.

Bootstrap & regtest pending on x86_64-unknown-linux-gnu.

Richard.

2012-03-19  Richard Guenther  <rguenther@suse.de>

	* stor-layout.c (finish_bitfield_representative): Fallback
	to conservative maximum size if the padding up to the next
	field cannot be computed as a constant.
	(finish_bitfield_layout): If we cannot compute the distance
	between the start of the bitfield representative and the
	bitfield member start a new representative.
	* expr.c (get_bit_range): The distance between the start of
	the bitfield representative and the bitfield member is zero
	if the field offsets are not constants.

	* gnat.dg/pack16.adb: New testcase.
	* gnat.dg/pack16_pkg.ads: Likewise.
	* gnat.dg/specs/pack8.ads: Likewise.
	* gnat.dg/specs/pack8_pkg.ads: Likewise.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c	(revision 185518)
--- gcc/stor-layout.c	(working copy)
*************** finish_bitfield_representative (tree rep
*** 1781,1790 ****
  	return;
        maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
  			     DECL_FIELD_OFFSET (repr));
!       gcc_assert (host_integerp (maxsize, 1));
!       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 		    + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
! 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
      }
    else
      {
--- 1781,1792 ----
  	return;
        maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
  			     DECL_FIELD_OFFSET (repr));
!       if (host_integerp (maxsize, 1))
! 	maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 		      + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
! 		      - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
!       else
! 	maxbitsize = bitsize;
      }
    else
      {
*************** finish_bitfield_layout (record_layout_in
*** 1888,1893 ****
--- 1890,1897 ----
  	}
        else if (DECL_BIT_FIELD_TYPE (field))
  	{
+ 	  gcc_assert (repr != NULL_TREE);
+ 
  	  /* Zero-size bitfields finish off a representative and
  	     do not have a representative themselves.  This is
  	     required by the C++ memory model.  */
*************** finish_bitfield_layout (record_layout_in
*** 1896,1901 ****
--- 1900,1923 ----
  	      finish_bitfield_representative (repr, prev);
  	      repr = NULL_TREE;
  	    }
+ 
+ 	  /* We assume that either DECL_FIELD_OFFSET of the representative
+ 	     and each bitfield member is a constant or they are equal.
+ 	     This is because we need to be able to compute the bit-offset
+ 	     of each field relative to the representative in get_bit_range
+ 	     during RTL expansion.
+ 	     If these constraints are not met, simply force a new
+ 	     representative to be generated.  That will at most
+ 	     generate worse code but still maintain correctness with
+ 	     respect to the C++ memory model.  */
+ 	  if (!((host_integerp (DECL_FIELD_OFFSET (repr), 1)
+ 		 && host_integerp (DECL_FIELD_OFFSET (field), 1))
+ 		|| operand_equal_p (DECL_FIELD_OFFSET (repr),
+ 				    DECL_FIELD_OFFSET (field), 0)))
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = start_bitfield_representative (field);
+ 	    }
  	}
        else
  	continue;
Index: gcc/expr.c
===================================================================
*** gcc/expr.c	(revision 185518)
--- gcc/expr.c	(working copy)
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4452,4458 ****
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr, offset;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
--- 4452,4458 ----
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4467,4478 ****
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  */
!   offset = size_diffop (DECL_FIELD_OFFSET (field),
! 			DECL_FIELD_OFFSET (repr));
!   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
! 	       + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;
    *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
--- 4467,4483 ----
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  DECL_FIELD_OFFSET of field and
!      repr are the same by construction if they are not constants,
!      see finish_bitfield_layout.  */
!   if (host_integerp (DECL_FIELD_OFFSET (field), 1)
!       && host_integerp (DECL_FIELD_OFFSET (repr), 1))
!     bitoffset = (tree_low_cst (DECL_FIELD_OFFSET (field), 1)
! 		 - tree_low_cst (DECL_FIELD_OFFSET (repr), 1)) * BITS_PER_UNIT;
!   else
!     bitoffset = 0;
!   bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 		- tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;
    *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
Index: gcc/testsuite/gnat.dg/pack16.adb
===================================================================
*** gcc/testsuite/gnat.dg/pack16.adb	(revision 0)
--- gcc/testsuite/gnat.dg/pack16.adb	(revision 0)
***************
*** 0 ****
--- 1,26 ----
+ -- { dg-do compile }
+ -- { dg-options "-gnatws" }
+ 
+ with Pack16_Pkg; use Pack16_Pkg;
+ 
+ procedure Pack16 is
+ 
+    type Sample_Table_T is array (1 .. N) of Integer;
+ 
+    type Clock_T is record
+       N_Ticks  : Integer := 0;
+    end record;
+ 
+    type Sampling_Descriptor_T is record
+       Values : Sample_Table_T;
+       Valid  : Boolean;
+       Tstamp : Clock_T;
+    end record;
+ 
+    pragma Pack (Sampling_Descriptor_T);
+ 
+    Sampling_Data : Sampling_Descriptor_T;
+ 
+ begin
+    null;
+ end;
Index: gcc/testsuite/gnat.dg/pack16_pkg.ads
===================================================================
*** gcc/testsuite/gnat.dg/pack16_pkg.ads	(revision 0)
--- gcc/testsuite/gnat.dg/pack16_pkg.ads	(revision 0)
***************
*** 0 ****
--- 1,5 ----
+ package Pack16_Pkg is
+ 
+    N : Natural := 16;
+ 
+ end Pack16_Pkg;
Index: gcc/testsuite/gnat.dg/specs/pack8.ads
===================================================================
*** gcc/testsuite/gnat.dg/specs/pack8.ads	(revision 0)
--- gcc/testsuite/gnat.dg/specs/pack8.ads	(revision 0)
***************
*** 0 ****
--- 1,19 ----
+ with Pack8_Pkg;
+ 
+ package Pack8 is
+ 
+    subtype Index_Type is Integer range 1 .. Pack8_Pkg.N;
+ 
+    subtype Str is String( Index_Type);
+ 
+    subtype Str2 is String (1 .. 11);
+ 
+    type Rec is record
+       S1 : Str;
+       S2 : Str;
+       B  : Boolean;
+       S3 : Str2;
+    end record;
+    pragma Pack (Rec);
+ 
+ end Pack8;
Index: gcc/testsuite/gnat.dg/specs/pack8_pkg.ads
===================================================================
*** gcc/testsuite/gnat.dg/specs/pack8_pkg.ads	(revision 0)
--- gcc/testsuite/gnat.dg/specs/pack8_pkg.ads	(revision 0)
***************
*** 0 ****
--- 1,5 ----
+ package Pack8_Pkg is
+ 
+    N : Natural := 1;
+ 
+ end Pack8_Pkg;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-19 13:14                       ` Richard Guenther
@ 2012-03-20  9:30                         ` Richard Guenther
  2012-03-22 10:26                           ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-20  9:30 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Mon, 19 Mar 2012, Richard Guenther wrote:

> On Mon, 19 Mar 2012, Eric Botcazou wrote:
> 
> > > But it's only ever computed for RECORD_TYPEs where DECL_QUALIFIER is
> > > unused.
> > 
> > OK, that could work indeed.
> > 
> > > For now giving up seems to be easiest (just give up when
> > > DECL_FIELD_OFFSET is not equal for all of the bitfield members).
> > > That will at most get you the miscompiles for the PRs back, for
> > > languages with funny structure layout.
> > 
> > I have another variant of the DECL_FIELD_OFFSET problem:
> > 
> > FAIL: gnat.dg/specs/pack8.ads (test for excess errors)
> > Excess errors:
> > +===========================GNAT BUG DETECTED==============================+
> > | 4.8.0 20120314 (experimental) [trunk revision 185395] (i586-suse-linux) GCC 
> > error:|
> > | in finish_bitfield_representative, at stor-layout.c:1762                 |
> > | Error detected at pack8.ads:17:4                   
> > 
> > Testcase attached:
> > 
> >   gnat.dg/specs/pack8.ads
> >   gnat.dg/specs/pack8_pkg.ads
> 
> Thanks.  That one indeed has different DECL_FIELD_OFFSET,
> 
> ((sizetype) MAX_EXPR <(integer) pack8__R1s, 0> + (sizetype) MAX_EXPR 
> <(integer) pack8__R1s, 0>) + 1
> 
> vs.
> 
> (sizetype) MAX_EXPR <(integer) pack8__R1s, 0> + (sizetype) MAX_EXPR 
> <(integer) pack8__R1s, 0>
> 
> we're not putting the 1 byte offset into DECL_FIELD_BIT_OFFSET
> because DECL_OFFSET_ALIGN is 8 in this case.  Eventually we should
> be able to relax how many bits we push into DECL_FIELD_BIT_OFFSET.
> 
> > I agree that giving up (for now) is a sensible option.  Thanks.
> 
> Done with the patch below.  We're actually not going to generate
> possibly wrong code again, only sub-optimal code.
> 
> Bootstrap & regtest pending on x86_64-unknown-linux-gnu.

This is what I have applied after bootstrapping and testing on
x86_64-unknown-linux-gnu.

Richard.

2012-03-20  Richard Guenther  <rguenther@suse.de>

	* stor-layout.c (finish_bitfield_representative): Fallback
	to conservative maximum size if the padding up to the next
	field cannot be computed as a constant.
	(finish_bitfield_layout): If we cannot compute the distance
	between the start of the bitfield representative and the
	bitfield member start a new representative.
	* expr.c (get_bit_range): The distance between the start of
	the bitfield representative and the bitfield member is zero
	if the field offsets are not constants.

	* gnat.dg/pack16.adb: New testcase.
	* gnat.dg/pack16_pkg.ads: Likewise.
	* gnat.dg/specs/pack8.ads: Likewise.
	* gnat.dg/specs/pack8_pkg.ads: Likewise.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c	(revision 185518)
--- gcc/stor-layout.c	(working copy)
*************** finish_bitfield_representative (tree rep
*** 1781,1790 ****
  	return;
        maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
  			     DECL_FIELD_OFFSET (repr));
!       gcc_assert (host_integerp (maxsize, 1));
!       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 		    + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
! 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
      }
    else
      {
--- 1781,1797 ----
  	return;
        maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
  			     DECL_FIELD_OFFSET (repr));
!       if (host_integerp (maxsize, 1))
! 	{
! 	  maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
! 			+ tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
! 			- tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
! 	  /* If the group ends within a bitfield nextf does not need to be
! 	     aligned to BITS_PER_UNIT.  Thus round up.  */
! 	  maxbitsize = (maxbitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
! 	}
!       else
! 	maxbitsize = bitsize;
      }
    else
      {
*************** finish_bitfield_layout (record_layout_in
*** 1888,1893 ****
--- 1895,1902 ----
  	}
        else if (DECL_BIT_FIELD_TYPE (field))
  	{
+ 	  gcc_assert (repr != NULL_TREE);
+ 
  	  /* Zero-size bitfields finish off a representative and
  	     do not have a representative themselves.  This is
  	     required by the C++ memory model.  */
*************** finish_bitfield_layout (record_layout_in
*** 1896,1901 ****
--- 1905,1928 ----
  	      finish_bitfield_representative (repr, prev);
  	      repr = NULL_TREE;
  	    }
+ 
+ 	  /* We assume that either DECL_FIELD_OFFSET of the representative
+ 	     and each bitfield member is a constant or they are equal.
+ 	     This is because we need to be able to compute the bit-offset
+ 	     of each field relative to the representative in get_bit_range
+ 	     during RTL expansion.
+ 	     If these constraints are not met, simply force a new
+ 	     representative to be generated.  That will at most
+ 	     generate worse code but still maintain correctness with
+ 	     respect to the C++ memory model.  */
+ 	  else if (!((host_integerp (DECL_FIELD_OFFSET (repr), 1)
+ 		      && host_integerp (DECL_FIELD_OFFSET (field), 1))
+ 		     || operand_equal_p (DECL_FIELD_OFFSET (repr),
+ 					 DECL_FIELD_OFFSET (field), 0)))
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = start_bitfield_representative (field);
+ 	    }
  	}
        else
  	continue;
Index: gcc/expr.c
===================================================================
*** gcc/expr.c	(revision 185518)
--- gcc/expr.c	(working copy)
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4452,4458 ****
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr, offset;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
--- 4452,4458 ----
  	       HOST_WIDE_INT bitpos)
  {
    unsigned HOST_WIDE_INT bitoffset;
!   tree field, repr;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4467,4478 ****
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  */
!   offset = size_diffop (DECL_FIELD_OFFSET (field),
! 			DECL_FIELD_OFFSET (repr));
!   bitoffset = (tree_low_cst (offset, 1) * BITS_PER_UNIT
! 	       + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 	       - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;
    *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
--- 4467,4483 ----
      }
  
    /* Compute the adjustment to bitpos from the offset of the field
!      relative to the representative.  DECL_FIELD_OFFSET of field and
!      repr are the same by construction if they are not constants,
!      see finish_bitfield_layout.  */
!   if (host_integerp (DECL_FIELD_OFFSET (field), 1)
!       && host_integerp (DECL_FIELD_OFFSET (repr), 1))
!     bitoffset = (tree_low_cst (DECL_FIELD_OFFSET (field), 1)
! 		 - tree_low_cst (DECL_FIELD_OFFSET (repr), 1)) * BITS_PER_UNIT;
!   else
!     bitoffset = 0;
!   bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
! 		- tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
  
    *bitstart = bitpos - bitoffset;
    *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
Index: gcc/testsuite/gnat.dg/pack16.adb
===================================================================
*** gcc/testsuite/gnat.dg/pack16.adb	(revision 0)
--- gcc/testsuite/gnat.dg/pack16.adb	(revision 0)
***************
*** 0 ****
--- 1,26 ----
+ -- { dg-do compile }
+ -- { dg-options "-gnatws" }
+ 
+ with Pack16_Pkg; use Pack16_Pkg;
+ 
+ procedure Pack16 is
+ 
+    type Sample_Table_T is array (1 .. N) of Integer;
+ 
+    type Clock_T is record
+       N_Ticks  : Integer := 0;
+    end record;
+ 
+    type Sampling_Descriptor_T is record
+       Values : Sample_Table_T;
+       Valid  : Boolean;
+       Tstamp : Clock_T;
+    end record;
+ 
+    pragma Pack (Sampling_Descriptor_T);
+ 
+    Sampling_Data : Sampling_Descriptor_T;
+ 
+ begin
+    null;
+ end;
Index: gcc/testsuite/gnat.dg/pack16_pkg.ads
===================================================================
*** gcc/testsuite/gnat.dg/pack16_pkg.ads	(revision 0)
--- gcc/testsuite/gnat.dg/pack16_pkg.ads	(revision 0)
***************
*** 0 ****
--- 1,5 ----
+ package Pack16_Pkg is
+ 
+    N : Natural := 16;
+ 
+ end Pack16_Pkg;
Index: gcc/testsuite/gnat.dg/specs/pack8.ads
===================================================================
*** gcc/testsuite/gnat.dg/specs/pack8.ads	(revision 0)
--- gcc/testsuite/gnat.dg/specs/pack8.ads	(revision 0)
***************
*** 0 ****
--- 1,19 ----
+ with Pack8_Pkg;
+ 
+ package Pack8 is
+ 
+    subtype Index_Type is Integer range 1 .. Pack8_Pkg.N;
+ 
+    subtype Str is String( Index_Type);
+ 
+    subtype Str2 is String (1 .. 11);
+ 
+    type Rec is record
+       S1 : Str;
+       S2 : Str;
+       B  : Boolean;
+       S3 : Str2;
+    end record;
+    pragma Pack (Rec);
+ 
+ end Pack8;
Index: gcc/testsuite/gnat.dg/specs/pack8_pkg.ads
===================================================================
*** gcc/testsuite/gnat.dg/specs/pack8_pkg.ads	(revision 0)
--- gcc/testsuite/gnat.dg/specs/pack8_pkg.ads	(revision 0)
***************
*** 0 ****
--- 1,5 ----
+ package Pack8_Pkg is
+ 
+    N : Natural := 1;
+ 
+ end Pack8_Pkg;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-20  9:30                         ` Richard Guenther
@ 2012-03-22 10:26                           ` Eric Botcazou
  2012-03-22 10:38                             ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-22 10:26 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1553 bytes --]

> This is what I have applied after bootstrapping and testing on
> x86_64-unknown-linux-gnu.

Thanks.  Last identified problem: the miscompilation with variant record.

  gnat.dg/pack17.adb

raised PROGRAM_ERROR : pack17.adb:36 explicit raise
FAIL: gnat.dg/pack17.adb execution test


Things start to go wrong at line 28:

24   procedure Fill (R : out Record_With_QImode_Variants) is
25   begin
26      R.C_Filler := (True, False, True, False, True, False, True);
27      R.C_Map := (True, False, True);
28      R.T_Int := 17;
29   end;

R is a packed record with variant part and -gnatR1 gives its layout:

for Record_With_Qimode_Variants'Object_Size use 24;
for Record_With_Qimode_Variants'Value_Size use 19;
for Record_With_Qimode_Variants'Alignment use 1;
for Record_With_Qimode_Variants use record
   D        at 0 range  0 ..  0;
   C_Filler at 0 range  1 ..  7;
   C_Map    at 1 range  0 ..  2;
   F_Bit    at 1 range  3 ..  3;
   F_Filler at 1 range  4 .. 10;
   T_Int    at 1 range  3 .. 10;

Now, since T_Int is within a variant, the assignment is translated into:

  r_1(D)->d___XVN.O.t_int = 17;

d___XVN is QUAL_UNION_TYPE and O a RECORD_TYPE (fields of QUAL_UNION_TYPE are 
always of RECORD_TYPE, that's why the overloading of qualifier is indeed OK).

The compiler now generates:

	movl	8(%ebp), %eax
	movb	$17, 1(%eax)

which is of course wrong given the layout above.

It seems to me that the new code implicitly assumes that the first field in a 
bitfield group starts on a byte boundary, which doesn't hold in Ada.

-- 
Eric Botcazou

[-- Attachment #2: pack17.adb --]
[-- Type: text/x-adasrc, Size: 891 bytes --]

-- { dg-do run }

procedure Pack17 is

   type Bitmap_T is array (Natural range <>) of Boolean;
   pragma Pack (Bitmap_T);

   type Uint8 is range 0 .. 2 ** 8 - 1;
   for Uint8'Size use 8;

   type Record_With_QImode_Variants (D : Boolean) is record
      C_Filler : Bitmap_T (1..7);
      C_Map : Bitmap_T (1..3);
      case D is
         when False =>
            F_Bit : Boolean;
            F_Filler : Bitmap_T (1..7);
         when True =>
            T_Int : Uint8;
      end case;
   end record;
   pragma Pack (Record_With_QImode_Variants);

   procedure Fill (R : out Record_With_QImode_Variants) is
   begin
      R.C_Filler := (True, False, True, False, True, False, True);
      R.C_Map := (True, False, True);
      R.T_Int := 17;
   end;

   RT : Record_With_QImode_Variants (D => True);

begin
   Fill (RT);
   if RT.T_Int /= 17 then
     raise Program_Error;
   end if;
end;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 10:26                           ` Eric Botcazou
@ 2012-03-22 10:38                             ` Richard Guenther
  2012-03-22 11:13                               ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-22 10:38 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Thu, 22 Mar 2012, Eric Botcazou wrote:

> > This is what I have applied after bootstrapping and testing on
> > x86_64-unknown-linux-gnu.
> 
> Thanks.  Last identified problem: the miscompilation with variant record.
> 
>   gnat.dg/pack17.adb
> 
> raised PROGRAM_ERROR : pack17.adb:36 explicit raise
> FAIL: gnat.dg/pack17.adb execution test
> 
> 
> Things start to go wrong at line 28:
> 
> 24   procedure Fill (R : out Record_With_QImode_Variants) is
> 25   begin
> 26      R.C_Filler := (True, False, True, False, True, False, True);
> 27      R.C_Map := (True, False, True);
> 28      R.T_Int := 17;
> 29   end;
> 
> R is a packed record with variant part and -gnatR1 gives its layout:
> 
> for Record_With_Qimode_Variants'Object_Size use 24;
> for Record_With_Qimode_Variants'Value_Size use 19;
> for Record_With_Qimode_Variants'Alignment use 1;
> for Record_With_Qimode_Variants use record
>    D        at 0 range  0 ..  0;
>    C_Filler at 0 range  1 ..  7;
>    C_Map    at 1 range  0 ..  2;
>    F_Bit    at 1 range  3 ..  3;
>    F_Filler at 1 range  4 .. 10;
>    T_Int    at 1 range  3 .. 10;
> 
> Now, since T_Int is within a variant, the assignment is translated into:
> 
>   r_1(D)->d___XVN.O.t_int = 17;
> 
> d___XVN is QUAL_UNION_TYPE and O a RECORD_TYPE (fields of QUAL_UNION_TYPE are 
> always of RECORD_TYPE, that's why the overloading of qualifier is indeed OK).
> 
> The compiler now generates:
> 
> 	movl	8(%ebp), %eax
> 	movb	$17, 1(%eax)
> 
> which is of course wrong given the layout above.
> 
> It seems to me that the new code implicitly assumes that the first field in a 
> bitfield group starts on a byte boundary, which doesn't hold in Ada.

Yes, I have noticed that from what you can see in the code.  So the
issue is that get_bit_range tells us that it's ok to touch bits
outside of the field - but that's obviously required.  We may not
change bits that are not covered by the FIELD_DECL which now
somehow happens?  That sounds like a latent bug in the bitfield
expander to me - it should never _change_ bits outside of bitpos/bitsize
as returned by get_inner_reference.

I guess I'll have to debug this if you don't beat me on it.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 10:38                             ` Richard Guenther
@ 2012-03-22 11:13                               ` Eric Botcazou
  2012-03-22 11:22                                 ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-22 11:13 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> Yes, I have noticed that from what you can see in the code.  So the
> issue is that get_bit_range tells us that it's ok to touch bits
> outside of the field - but that's obviously required.  We may not
> change bits that are not covered by the FIELD_DECL which now
> somehow happens?  That sounds like a latent bug in the bitfield
> expander to me - it should never _change_ bits outside of bitpos/bitsize
> as returned by get_inner_reference.

It's the new C++0x memory model stuff:

void
store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
		 unsigned HOST_WIDE_INT bitnum,
		 unsigned HOST_WIDE_INT bitregion_start,
		 unsigned HOST_WIDE_INT bitregion_end,
		 enum machine_mode fieldmode,
		 rtx value)
{
  /* Under the C++0x memory model, we must not touch bits outside the
     bit region.  Adjust the address to start at the beginning of the
     bit region.  */
  if (MEM_P (str_rtx)
      && bitregion_start > 0)
    {
      enum machine_mode bestmode;
      enum machine_mode op_mode;
      unsigned HOST_WIDE_INT offset;

      op_mode = mode_for_extraction (EP_insv, 3);
      if (op_mode == MAX_MACHINE_MODE)
	op_mode = VOIDmode;

      offset = bitregion_start / BITS_PER_UNIT;
      bitnum -= bitregion_start;
      bitregion_end -= bitregion_start;
      bitregion_start = 0;

which assumes that a bitfield group starts on a byte boundary.  So this is 
probably indeed latent in GCC 4.7 as well.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 11:13                               ` Eric Botcazou
@ 2012-03-22 11:22                                 ` Richard Guenther
  2012-03-22 11:48                                   ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-22 11:22 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Thu, 22 Mar 2012, Eric Botcazou wrote:

> > Yes, I have noticed that from what you can see in the code.  So the
> > issue is that get_bit_range tells us that it's ok to touch bits
> > outside of the field - but that's obviously required.  We may not
> > change bits that are not covered by the FIELD_DECL which now
> > somehow happens?  That sounds like a latent bug in the bitfield
> > expander to me - it should never _change_ bits outside of bitpos/bitsize
> > as returned by get_inner_reference.
> 
> It's the new C++0x memory model stuff:
> 
> void
> store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
> 		 unsigned HOST_WIDE_INT bitnum,
> 		 unsigned HOST_WIDE_INT bitregion_start,
> 		 unsigned HOST_WIDE_INT bitregion_end,
> 		 enum machine_mode fieldmode,
> 		 rtx value)
> {
>   /* Under the C++0x memory model, we must not touch bits outside the
>      bit region.  Adjust the address to start at the beginning of the
>      bit region.  */
>   if (MEM_P (str_rtx)
>       && bitregion_start > 0)
>     {
>       enum machine_mode bestmode;
>       enum machine_mode op_mode;
>       unsigned HOST_WIDE_INT offset;
> 
>       op_mode = mode_for_extraction (EP_insv, 3);
>       if (op_mode == MAX_MACHINE_MODE)
> 	op_mode = VOIDmode;
> 
>       offset = bitregion_start / BITS_PER_UNIT;
>       bitnum -= bitregion_start;
>       bitregion_end -= bitregion_start;
>       bitregion_start = 0;
> 
> which assumes that a bitfield group starts on a byte boundary.  So this is 
> probably indeed latent in GCC 4.7 as well.

But the bitfield group _does_ start on a byte boundary.  At least
that's what the new code in stor-layout ensures.

It's ok to assume the group starts on a byte boundary.  But it's not
ok to modify bits outside of the access, so the RMW cycle has to mask
them properly.
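
To illustrate the masking I mean (a made-up helper in plain C for a
little-endian target, not the actual expmed.c code), for an 8-bit field
at bit offset 3 of a byte-aligned two-byte group:

  void
  store_field_3_8 (unsigned char *p, unsigned int val)
  {
    unsigned int group, mask = 0xffu << 3;   /* bits 3..10 */

    /* Read only the bytes of the group.  */
    group = p[0] | ((unsigned int) p[1] << 8);
    /* Change only the bits of the field within the group.  */
    group = (group & ~mask) | ((val << 3) & mask);
    /* Write back only the bytes of the group.  */
    p[0] = group & 0xff;
    p[1] = (group >> 8) & 0xff;
  }

Nothing outside the two bytes of the group is touched, and nothing
outside bits 3..10 is changed.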

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 11:22                                 ` Richard Guenther
@ 2012-03-22 11:48                                   ` Eric Botcazou
  2012-03-22 12:04                                     ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-22 11:48 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> But the bitfield group _does_ start on a byte boundary.  At least
> that's what the new code in stor-layout ensures.

No, it doesn't, since it does the computation on a record by record basis.
Here we have a bitfield group that starts in a record and ends up in a nested 
record.

> It's ok to assume the group starts on a byte boundary.  But it's not
> ok to modify bits outside of the access, so the RMW cycle has to mask
> them properly.

We have 2 options:
  1. the GCC 4.7 approach where get_bit_range returns 0 for bitstart, which 
doesn't make much sense but is in keeping with store_bit_field.
  2. the GCC 4.8 approach where get_bit_range attempts to return a more correct 
value, but is currently wrong (bitregion_start=11, bitregion_end=18) because 
the representative of the bitfield is wrong.  The real representative of the 
bitfield is outside the record type.
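
Replaying the store_bit_field prologue quoted earlier with these numbers
(illustrative arithmetic only, not taken from an actual debug session):

  /* T_Int sits at byte 1, bits 3..10 of the record, so bitnum == 11,
     and the bogus representative yields bitregion_start == 11.  */
  offset  = bitregion_start / BITS_PER_UNIT;   /* 11 / 8  == 1 byte */
  bitnum -= bitregion_start;                   /* 11 - 11 == 0 bits */

so the address is advanced by a whole byte while bitnum drops all 11
bits, and the 8-bit store lands at bit 8 instead of bit 11 - exactly the
"movb $17, 1(%eax)" shown earlier.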

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 11:48                                   ` Eric Botcazou
@ 2012-03-22 12:04                                     ` Richard Guenther
  2012-03-22 12:18                                       ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-22 12:04 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Thu, Mar 22, 2012 at 12:47 PM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>> But the bitfield group _does_ start on a byte boundary.  At least
>> that's what the new code in stor-layout ensures.
>
> No, it doesn't, since it does the computation on a record by record basis.
> Here we have a bitfield group that starts in a record and ends up in a nested
> record.
>
>> It's ok to assume the group starts on a byte boundary.  But it's not
>> ok to modify bits outside of the access, so the RMW cycle has to mask
>> them properly.
>
> We have 2 options:
>  1. the GCC 4.7 approach where get_bit_range returns 0 for bitstart, which
> doesn't make much sense but is in keeping with store_bit_field.
>  2. the GCC 4.8 approach where get_bit_range attempts to return a more correct
> value, but is currently wrong (bitregion_start=11, bitregion_end=18) because
> the representative of the bitfield is wrong.  The real representative of the
> bitfield is outside the record type.

bitregion_start == 11 looks bogus.  The representative is starting at

  DECL_FIELD_BIT_OFFSET (repr)
    = size_binop (BIT_AND_EXPR,
                  DECL_FIELD_BIT_OFFSET (field),
                  bitsize_int (~(BITS_PER_UNIT - 1)));

which looks ok, the size of the representative is (at minimum)

  size = size_diffop (DECL_FIELD_OFFSET (field),
                      DECL_FIELD_OFFSET (repr));
  gcc_assert (host_integerp (size, 1));
  bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
             + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
             - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
             + tree_low_cst (DECL_SIZE (field), 1));

  /* Round up bitsize to multiples of BITS_PER_UNIT.  */
  bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);

that looks ok to me as well.  Is the issue that we, in get_bit_range,
compute bitregion_start relative to the byte-aligned offset of the
representative?

At least I still can't see where things go wrong from inspecting the
code ...

Richard.


> --
> Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 12:04                                     ` Richard Guenther
@ 2012-03-22 12:18                                       ` Eric Botcazou
  2012-03-22 12:39                                         ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-22 12:18 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> bitregion_start == 11 looks bogus.  The representative is starting at
>
>   DECL_FIELD_BIT_OFFSET (repr)
>     = size_binop (BIT_AND_EXPR,
>                   DECL_FIELD_BIT_OFFSET (field),
>                   bitsize_int (~(BITS_PER_UNIT - 1)));
>
> which looks ok

It cannot be OK if you want it to be on a byte boundary, since the field isn't 
on a byte boundary itself and they have the same DECL_FIELD_BIT_OFFSET (0).

> the size of the representative is (at minimum) 
>
>   size = size_diffop (DECL_FIELD_OFFSET (field),
>                       DECL_FIELD_OFFSET (repr));
>   gcc_assert (host_integerp (size, 1));
>   bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
>              + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
>              - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
>              + tree_low_cst (DECL_SIZE (field), 1));
>
>   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
>   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
>
> that looks ok to me as well.  Is the issue that we, in get_bit_range,
> compute bitregion_start relative to the byte-aligned offset of the
> representative?

The issue is that the representative is assumed to be on a byte boundary in 
get_bit_range, but it isn't in the case at hand.  So either we cope with that 
(this is the GCC 4.7 approach) or we change the representative somehow.

IOW, either we pretend that a bitfield group is entirely contained within a 
single record type or we acknowledge that a bitfield group can cross a record 
boundary.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 12:18                                       ` Eric Botcazou
@ 2012-03-22 12:39                                         ` Richard Guenther
  2012-03-23  9:13                                           ` Richard Guenther
  2012-03-26  7:18                                           ` Eric Botcazou
  0 siblings, 2 replies; 46+ messages in thread
From: Richard Guenther @ 2012-03-22 12:39 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Thu, 22 Mar 2012, Eric Botcazou wrote:

> > bitregion_start == 11 looks bogus.  The representative is starting at
> >
> >   DECL_FIELD_BIT_OFFSET (repr)
> >     = size_binop (BIT_AND_EXPR,
> >                   DECL_FIELD_BIT_OFFSET (field),
> >                   bitsize_int (~(BITS_PER_UNIT - 1)));
> >
> > which looks ok
> 
> It cannot be OK if you want it to be on a byte boundary, since the field isn't 
> on a byte boundary itself and they have the same DECL_FIELD_BIT_OFFSET (0).

Huh?  If they have DECL_FIELD_BIT_OFFSET of zero they are at a byte
boundary, no?  Wait - the RECORD_TYPE itself is at non-zero
DECL_FIELD_BIT_OFFSET and thus a zero DECL_FIELD_BIT_OFFSET for its
fields does not mean anything?!  But how can DECL_OFFSET_ALIGN
be still valid for such field?  Obviously if DECL_FIELD_OFFSET == 0,
DECL_FIELD_BIT_OFFSET == 0 then the offset needs to be aligned
to DECL_OFFSET_ALIGN.  Which then means DECL_OFFSET_ALIGN is a
bit-alignment?

Anyway, since we are trying to compute a nice mode to use for
the bitfield representative we can give up the moment we do not
know how to reach BITS_PER_UNIT alignment.  Or we can
simply only try to ensure MIN (BITS_PER_UNIT, DECL_OFFSET_ALIGN)
alignment/size of the representative.  Of course the bitfield
expansion code has to deal with non-byte-aligned representatives
then, and we'd always have to use BLKmode for them.

> > the size of the representative is (at minimum) 
> >
> >   size = size_diffop (DECL_FIELD_OFFSET (field),
> >                       DECL_FIELD_OFFSET (repr));
> >   gcc_assert (host_integerp (size, 1));
> >   bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
> >              + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
> >              - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
> >              + tree_low_cst (DECL_SIZE (field), 1));
> >
> >   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
> >   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> >
> > that looks ok to me as well.  Is the issue that we, in get_bit_range,
> > compute bitregion_start relative to the byte-aligned offset of the
> > representative?
> 
> The issue is that the representative is assumed to be on a byte boundary in 
> get_bit_range, but it isn't in the case at hand.  So either we cope with that 
> (this is the GCC 4.7 approach) or we change the representative somehow.

I think we can't change it to be on a byte boundary - the same record
may be used at different bit-positions, no?

> IOW, either we pretend that a bitfield group is entirely contained within a 
> single record type or we acknowledge that a bitfield group can cross a record 
> boundary.

Sure, we acknowledge it can cross a record boundary.  I just was not
aware we cannot statically compute the bit-offset to the previous byte
for a record type.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 12:39                                         ` Richard Guenther
@ 2012-03-23  9:13                                           ` Richard Guenther
  2012-03-26  7:18                                             ` Eric Botcazou
  2012-03-26  7:18                                           ` Eric Botcazou
  1 sibling, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-23  9:13 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Thu, 22 Mar 2012, Richard Guenther wrote:

> On Thu, 22 Mar 2012, Eric Botcazou wrote:
> 
> > > bitregion_start == 11 looks bogus.  The representative is starting at
> > >
> > >   DECL_FIELD_BIT_OFFSET (repr)
> > >     = size_binop (BIT_AND_EXPR,
> > >                   DECL_FIELD_BIT_OFFSET (field),
> > >                   bitsize_int (~(BITS_PER_UNIT - 1)));
> > >
> > > which looks ok
> > 
> > It cannot be OK if you want it to be on a byte boundary, since the field isn't 
> > on a byte boundary itself and they have the same DECL_FIELD_BIT_OFFSET (0).
> 
> Huh?  If they have DECL_FIELD_BIT_OFFSET of zero they are at a byte
> boundary, no?  Wait - the RECORD_TYPE itself is at non-zero
> DECL_FIELD_BIT_OFFSET and thus a zero DECL_FIELD_BIT_OFFSET for its
> fields does not mean anything?!  But how can DECL_OFFSET_ALIGN
> be still valid for such field?  Obviously if DECL_FIELD_OFFSET == 0,
> DECL_FIELD_BIT_OFFSET == 0 then the offset needs to be aligned
> to DECL_OFFSET_ALIGN.  Which then means DECL_OFFSET_ALIGN is a
> bit-alignment?
> 
> Anyway, since we are trying to compute a nice mode to use for
> the bitfield representative we can give up in the second that
> we do not know how to reach BITS_PER_UNIT alignment.  Or we can
> simply only try to ensure MIN (BITS_PER_UNIT, DECL_OFFSET_ALIGN)
> alignment/size of the representative.  Of course the bitfield
> expansion code has to deal with non-byte-aligned representatives
> then, and we'd always have to use BLKmode for them.

Btw, now checking with gdb, DECL_OFFSET_ALIGN is always 128 for
all of the fields - that looks bogus.  DECL_ALIGN is 1, but that
doesn't mean DECL_OFFSET_ALIGN should not be 1 as well, no?

Thanks,
Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-23  9:13                                           ` Richard Guenther
@ 2012-03-26  7:18                                             ` Eric Botcazou
  2012-03-26  7:37                                               ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26  7:18 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> Btw, now checking with gdb, DECL_OFFSET_ALIGN is always 128 for
> all of the fields - that looks bogus.  DECL_ALIGN is 1, but that
> doesn't mean DECL_OFFSET_ALIGN should not be 1 as well, no?

DECL_OFFSET_ALIGN is set to BIGGEST_ALIGNMENT for the first field, see 
start_record_layout.  If DECL_FIELD_OFFSET doesn't change for the next fields, 
DECL_OFFSET_ALIGN will very likely not either.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-22 12:39                                         ` Richard Guenther
  2012-03-23  9:13                                           ` Richard Guenther
@ 2012-03-26  7:18                                           ` Eric Botcazou
  1 sibling, 0 replies; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26  7:18 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> Huh?  If they have DECL_FIELD_BIT_OFFSET of zero they are at a byte
> boundary, no?  Wait - the RECORD_TYPE itself is at non-zero
> DECL_FIELD_BIT_OFFSET and thus a zero DECL_FIELD_BIT_OFFSET for its
> fields does not mean anything?!

DECL_FIELD_BIT_OFFSET is relative to the enclosing record type.

> But how can DECL_OFFSET_ALIGN be still valid for such field?

Likewise, it is relative to DECL_FIELD_OFFSET, which is relative to the 
enclosing record type.

> Obviously if DECL_FIELD_OFFSET == 0, DECL_FIELD_BIT_OFFSET == 0 then the
> offset needs to be aligned to DECL_OFFSET_ALIGN.  Which then means
> DECL_OFFSET_ALIGN is a bit-alignment?

It's just a relative alignment, not an absolute one.
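
Purely as an illustration (paraphrasing, not quoting, the layout
macros): the absolute bit position of a field within the containing
object is

  object_bitpos (field) = object_bitpos (enclosing record)
                          + DECL_FIELD_OFFSET (field) * BITS_PER_UNIT
                          + DECL_FIELD_BIT_OFFSET (field)

so DECL_FIELD_BIT_OFFSET == 0 only implies byte alignment when the
enclosing record itself starts on a byte boundary, which a packed Ada
record need not.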

> Anyway, since we are trying to compute a nice mode to use for
> the bitfield representative we can give up in the second that
> we do not know how to reach BITS_PER_UNIT alignment.  Or we can
> simply only try to ensure MIN (BITS_PER_UNIT, DECL_OFFSET_ALIGN)
> alignment/size of the representative.  Of course the bitfield
> expansion code has to deal with non-byte-aligned representatives
> then, and we'd always have to use BLKmode for them.

But BLKmode forces byte alignment.  Anything not byte-aligned must use an 
integral mode.

> I think we can't change it to be on a byte-boundary, the same record
> may be used at different bit-positions, no?

Sure.

> Sure, we acknowledge it can cross a record boundary.  I just was not
> aware we cannot statically compute the bit-offset to the previous byte
> for a record type.

We can if we look into the entire object, not just the enclosing record.  The 
problem is that, while get_inner_reference and friends do the former, the new 
code in stor-layout.c only does the latter.

OK, I think we should enter some sort of a degraded mode for these non-byte 
aligned records, as they cannot occur in C/C++.  I'll try to come up with 
something.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  7:18                                             ` Eric Botcazou
@ 2012-03-26  7:37                                               ` Richard Guenther
  2012-03-26  7:43                                                 ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-26  7:37 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, Mar 26, 2012 at 9:18 AM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>> Btw, now checking with gdb, DECL_OFFSET_ALIGN is always 128 for
>> all of the fields - that looks bogus.  DECL_ALIGN is 1, but that
>> doesn't mean DECL_OFFSET_ALIGN should not be 1 as well, no?
>
> DECL_OFFSET_ALIGN is set to BIGGEST_ALIGNMENT for the first field, see
> start_record_layout.  If DECL_FIELD_OFFSET doesn't change for the next fields,
> DECL_OFFSET_ALIGN will very likely not either.

Isn't DECL_OFFSET_ALIGN completely useless if it is "relative"?

I placed the code in stor-layout.c and computed the field
representatives per record type (and not per full object) because
that's the only way to do it scalably and without requiring excessive
space.  The invariant should be that you can replace the bitfield
access with an access of the representative plus a bitfield extract
from the read value.  If bit-aligned representatives need an integral
mode then the access is not constrained by the representative's size,
which means having a representative in this case is useless (for
correctness), so you can as well set none for these cases.  But I
wonder how we can even detect non-byte-alignment?  I see the field has
DECL_ALIGN of 1, but isn't that "relative" as well?  At least I'm
totally confused with DECL_OFFSET_ALIGN being "relative".
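
As a purely illustrative sketch of that invariant in source terms
(made-up struct, field widths and little-endian bit numbering; this is
not GCC code):

  /* Bits 0..15 form one bitfield group; assume its representative is
     the first 16 bits of the struct.  */
  struct S { unsigned a : 3; unsigned b : 5; unsigned c : 8; };

  void
  set_b (struct S *s, unsigned val)
  {
    unsigned short repr;
    __builtin_memcpy (&repr, s, sizeof repr);   /* load representative */
    repr &= ~(0x1fu << 3);                      /* clear bits of b */
    repr |= (val & 0x1fu) << 3;                 /* insert new value */
    __builtin_memcpy (s, &repr, sizeof repr);   /* store representative */
  }

so a store to s->b never touches storage outside the representative.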

Richard.

> --
> Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  7:37                                               ` Richard Guenther
@ 2012-03-26  7:43                                                 ` Eric Botcazou
  2012-03-26  8:01                                                   ` Richard Guenther
  2012-03-26  8:10                                                   ` Eric Botcazou
  0 siblings, 2 replies; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26  7:43 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> Isn't DECL_OFFSET_ALIGN completely useless if it is "relative"?

Good question.

> But I wonder how we can even detect non-byte-alignment?  I see the field has
> DECL_ALIGN of 1, but isn't that "relative" as well?  At least I'm totally
> confused with DECL_OFFSET_ALIGN being "relative".

DECL_ALIGN is supposed to be the absolute counterpart.  I think we indeed can't 
really detect it in stor-layout, so the only place is very likely get_bit_range.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  7:43                                                 ` Eric Botcazou
@ 2012-03-26  8:01                                                   ` Richard Guenther
  2012-03-26  8:18                                                     ` Eric Botcazou
  2012-03-26  8:10                                                   ` Eric Botcazou
  1 sibling, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-26  8:01 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, Mar 26, 2012 at 9:43 AM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>> Isn't DECL_OFFSET_ALIGN completely useless if it is "relative"?
>
> Good question.

The only use of DECL_OFFSET_ALIGN I see is in stor-layout.c itself:

  /* If this field ended up more aligned than we thought it would be (we
     approximate this by seeing if its position changed), lay out the field
     again; perhaps we can use an integral mode for it now.  */
  if (! integer_zerop (DECL_FIELD_BIT_OFFSET (field)))
    actual_align = (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
                    & - tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1));
  else if (integer_zerop (DECL_FIELD_OFFSET (field)))
    actual_align = MAX (BIGGEST_ALIGNMENT, rli->record_align);
  else if (host_integerp (DECL_FIELD_OFFSET (field), 1))
    actual_align = (BITS_PER_UNIT
                   * (tree_low_cst (DECL_FIELD_OFFSET (field), 1)
                      & - tree_low_cst (DECL_FIELD_OFFSET (field), 1)));
  else
    actual_align = DECL_OFFSET_ALIGN (field);
  /* ACTUAL_ALIGN is still the actual alignment *within the record* .
     store / extract bit field operations will check the alignment of the
     record against the mode of bit fields.  */

which again computes a "relative" actual_align.  Removing DECL_OFFSET_ALIGN
would also get rid of that odd scaling of operand 2 of COMPONENT_REFs.

>> But I wonder how we can even detect non-byte-alignment?  I see the field has
>> DECL_ALIGN of 1, but isn't that "relative" as well?  At least I'm totally
>> confused with DECL_OFFSET_ALIGN being "relative".
>
> DECL_ALIGN is supposed to be the absolute counterpart.  I think we indeed can't
> really in stor-layout, so the only place is very likely get_bit_range.

OTOH if DECL_ALIGN is absolute then if the first field of a record type has
DECL_ALIGN that is a multiple of BITS_PER_UNIT we know we can use
the present stor-layout code?  So we can check that and give up computing
representatives at all for a record type if that does not hold.
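
Something along these lines (a hypothetical sketch only, not a patch,
and whether the condition is sufficient is exactly the question):

  /* Return true if computing bitfield representatives for record type
     T with the present stor-layout code looks safe, i.e. the first
     field is known to start on a byte boundary.  */
  static bool
  record_representatives_safe_p (tree t)
  {
    tree field = TYPE_FIELDS (t);
    return (field == NULL_TREE
            || DECL_ALIGN (field) % BITS_PER_UNIT == 0);
  }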

Richard.

> --
> Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  7:43                                                 ` Eric Botcazou
  2012-03-26  8:01                                                   ` Richard Guenther
@ 2012-03-26  8:10                                                   ` Eric Botcazou
  2012-03-26  8:52                                                     ` Richard Guenther
  1 sibling, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26  8:10 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches, Richard Guenther

[-- Attachment #1: Type: text/plain, Size: 528 bytes --]

> I think we indeed can't really in stor-layout, so the only place is very
> likely get_bit_range.

Something like that for example.  I think the expmed.c hunk should be applied 
to the 4.7 branch as well, because the new code in store_bit_field is quite 
dangerous without it.


	* expmed.c (store_bit_field): Assert that BITREGION_START is a multiple
	of a unit before computing the offset in units.
	* expr.c (get_bit_range): Return the null range if the enclosing record
	is part of a larger bit field.
	

-- 
Eric Botcazou

[-- Attachment #2: p.diff --]
[-- Type: text/x-diff, Size: 1927 bytes --]

Index: expr.c
===================================================================
--- expr.c	(revision 185767)
+++ expr.c	(working copy)
@@ -4462,6 +4462,25 @@ get_bit_range (unsigned HOST_WIDE_INT *b
       return;
     }
 
+  /* If we have a DECL_BIT_FIELD_REPRESENTATIVE but the enclosing record is
+     part of a larger bit field, then the representative does not serve any
+     useful purpose.  This can occur in Ada.  */
+  if (handled_component_p (TREE_OPERAND (exp, 0)))
+    {
+      enum machine_mode rmode;
+      HOST_WIDE_INT rbitsize, rbitpos;
+      tree roffset;
+      int unsignedp;
+      int volatilep = 0;
+      get_inner_reference (TREE_OPERAND (exp, 0), &rbitsize, &rbitpos,
+			   &roffset, &rmode, &unsignedp, &volatilep, false);
+      if ((rbitpos % BITS_PER_UNIT) != 0)
+	{
+	  *bitstart = *bitend = 0;
+	  return;
+        }
+    }
+
   /* Compute the adjustment to bitpos from the offset of the field
      relative to the representative.  DECL_FIELD_OFFSET of field and
      repr are the same by construction if they are not constants,
Index: expmed.c
===================================================================
--- expmed.c	(revision 185570)
+++ expmed.c	(working copy)
@@ -828,8 +828,7 @@ store_bit_field (rtx str_rtx, unsigned H
   /* Under the C++0x memory model, we must not touch bits outside the
      bit region.  Adjust the address to start at the beginning of the
      bit region.  */
-  if (MEM_P (str_rtx)
-      && bitregion_start > 0)
+  if (MEM_P (str_rtx) && bitregion_start > 0)
     {
       enum machine_mode bestmode;
       enum machine_mode op_mode;
@@ -839,6 +838,8 @@ store_bit_field (rtx str_rtx, unsigned H
       if (op_mode == MAX_MACHINE_MODE)
 	op_mode = VOIDmode;
 
+      gcc_assert ((bitregion_start % BITS_PER_UNIT) == 0);
+
       offset = bitregion_start / BITS_PER_UNIT;
       bitnum -= bitregion_start;
       bitregion_end -= bitregion_start;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  8:01                                                   ` Richard Guenther
@ 2012-03-26  8:18                                                     ` Eric Botcazou
  2012-03-26  8:34                                                       ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26  8:18 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> OTOH if DECL_ALIGN is absolute then if the first field of a record type has
> DECL_ALIGN that is a multiple of BITS_PER_UNIT we know we can use
> the present stor-layout code?  So we can check that and give up computing
> representatives at all for a record type if that does not hold.

I don't think so, DECL_ALIGN of a field is probably honored only if TYPE_ALIGN 
of the record is, and TYPE_ALIGN isn't honored for a bit field.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  8:18                                                     ` Eric Botcazou
@ 2012-03-26  8:34                                                       ` Richard Guenther
  2012-03-26 17:54                                                         ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-26  8:34 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, Mar 26, 2012 at 10:17 AM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>> OTOH if DECL_ALIGN is absolute then if the first field of a record type has
>> DECL_ALIGN that is a multiple of BITS_PER_UNIT we know we can use
>> the present stor-layout code?  So we can check that and give up computing
>> representatives at all for a record type if that does not hold.
>
> I don't think so, DECL_ALIGN of a field is probably honored only if TYPE_ALIGN
> of the record is, and TYPE_ALIGN isn't honored for a bit field.

Uh.  When is a field a bit field though?  At least stor-layout.c resets
DECL_BIT_FIELD when the local relative alignment is "proper" and the
field has an integer mode.  That's overly optimistic if the record is
placed at a bit position.  Of course we still have DECL_BIT_FIELD_TYPE,
but I wonder what the DECL_BIT_FIELD flag is for if we cannot really
know whether the field in the end is a "bitfield" anyway... (what value
do the various field-decls in a component-ref chain have anyway if the
real interpretation is subject to the containing object, which might
even only be specified by the type used for an indirect ref ...)

Richard.

> --
> Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  8:10                                                   ` Eric Botcazou
@ 2012-03-26  8:52                                                     ` Richard Guenther
  2012-03-26 18:04                                                       ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-26  8:52 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, 26 Mar 2012, Eric Botcazou wrote:

> > I think we indeed can't really in stor-layout, so the only place is very
> > likely get_bit_range.
> 
> Something like that for example.  I think the expmed.c hunk should be applied 
> to the 4.7 branch as well, because the new code in store_bit_field is quite 
> dangerous without it.
> 
> 
> 	* expmed.c (store_bit_field): Assert that BITREGION_START is a multiple
> 	of a unit before computing the offset in units.
> 	* expr.c (get_bit_range): Return the null range if the enclosing record
> 	is part of a larger bit field.

The patch looks reasonable - can we compute this backward from the
result of the outer get_inner_reference call and the outermost
field-decl though?  Or make get_inner_reference compute that while
analyzing the full reference and return a flag?  OTOH it shouldn't
be too expensive.

I agree the assert should go to the branch as well, though the code
only ever triggers there with --param allow-store-data-races=0.

Thanks,
Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  8:34                                                       ` Richard Guenther
@ 2012-03-26 17:54                                                         ` Eric Botcazou
  2012-03-27  6:51                                                           ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26 17:54 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> Uh.  When is a field a bit field though?  At least stor-layout.c
> resets DECL_BIT_FIELD
> when local relative alignment is "proper" and the field has an integer
> mode.  That's
> overly optimistic if the record is placed at a bit position.  Of
> course we still have
> DECL_BIT_FIELD_TYPE, but I wonder what the DECL_BIT_FIELD flag is for if
> we cannot really know whether the field in the end is a "bitfield"
> anyway... (what value do the various field-decls in a component-ref chain
> have anyway if the real interpretation is subject to the containing object,
> which might be even only
> specified by the type used for an indirect ref ...)

We're talking about a bit field with record type here, so anything calculated 
within the record type in isolation is essentially invalidated for the bit 
field in question.  Fortunately this only occurs in Ada, which doesn't use the 
very questionable C++ memory model (fingers crossed for the future. :-)

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26  8:52                                                     ` Richard Guenther
@ 2012-03-26 18:04                                                       ` Eric Botcazou
  2012-03-27  6:51                                                         ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-26 18:04 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> The patch looks reasonable - can we compute this backward from the
> result of the outer get_inner_reference call and the outermost
> field-decl though?  Or make get_inner_reference compute that while
> analyzing the full reference and return a flag?  OTOH it shouldn't
> be too expensive.

There are a lot of calls to get_inner_reference throughout the compiler though.
And it is cheap if you compare it with the 4.7 code, which essentially does the 
same processing for _every_ field in the record (and even if the record object 
isn't a handled_component_p).

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26 17:54                                                         ` Eric Botcazou
@ 2012-03-27  6:51                                                           ` Richard Guenther
  2012-03-27  7:08                                                             ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-27  6:51 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, 26 Mar 2012, Eric Botcazou wrote:

> > Uh.  When is a field a bit field though?  At least stor-layout.c
> > resets DECL_BIT_FIELD
> > when local relative alignment is "proper" and the field has an integer
> > mode.  That's
> > overly optimistic if the record is placed at a bit position.  Of
> > course we still have
> > DECL_BIT_FIELD_TYPE, but I wonder what the DECL_BIT_FIELD flag is for if
> > we cannot really know whether the field in the end is a "bitfield"
> > anyway... (what value do the various field-decls in a component-ref chain
> > have anyway if the real interpretation is subject to the containing object,
> > which might be even only
> > specified by the type used for an indirect ref ...)
> 
> We're talking about a bit field with record type here, so anything calculated 
> within the record type in isolation is essentially invalidated for the bit 
> field in question.  Fortunately this only occurs in Ada, which doesn't use the 
> very questionable C++ memory model (fingers crossed for the future. :-)

;)

Btw, I put things in stor-layout.c exactly to allow a langhook
eventually controlling things for the bitfield representative.
Mind adding one that simply disables them completely for Ada?
Or maybe, for selected record types, so that we do

  if (lang_hooks.types.lower_bitfield_layout (rli->t))
    finish_bitfield_layout (rli->t);

?  Letting the FE decide when to punt is certainly better than
trying to figure it out from stor-layout.c code.  I suppose
for types that are supposed to interoperate with C / C++ using
the C / C++ rules makes sense (I suppose there is sth like
C/C++ interoperability with Ada).
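
To make that concrete, a hypothetical sketch only (neither the hook
nor these names exist; the call above simply assumes them):

  /* langhooks.h, among the other type hooks: return true if the C/C++
     bitfield memory model (representatives) should apply to record
     type T.  */
  bool (*lower_bitfield_layout) (tree t);

  /* langhooks.c default, keeping the current behaviour for every
     front end; Ada could override it to return false for types that
     need not be C/C++-compatible.  */
  static bool
  lhd_lower_bitfield_layout (tree t ATTRIBUTE_UNUSED)
  {
    return true;
  }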

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-26 18:04                                                       ` Eric Botcazou
@ 2012-03-27  6:51                                                         ` Richard Guenther
  0 siblings, 0 replies; 46+ messages in thread
From: Richard Guenther @ 2012-03-27  6:51 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Mon, 26 Mar 2012, Eric Botcazou wrote:

> > The patch looks reasonable - can we compute this backward from the
> > result of the outer get_inner_reference call and the outermost
> > field-decl though?  Or make get_inner_reference compute that while
> > analyzing the full reference and return a flag?  OTOH it shouldn't
> > be too expensive.
> 
> There are a lot of calls to get_inner_reference throughout the compiler though.
> And it is cheap if you compare it with the 4.7 code, which essentially does the 
> same processing for _every_ field in the record (and even if the record object 
> isn't a handled_component_p).

Yes, indeed.  One of the main motivations was to get rid of that quadratic
complexity ...

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-27  6:51                                                           ` Richard Guenther
@ 2012-03-27  7:08                                                             ` Eric Botcazou
  2012-03-27  7:50                                                               ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-27  7:08 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> Btw, I put things in stor-layout.c exactly to allow a langhook
> eventually controlling things for the bitfield representative.
> Mind adding one that simply disables them completely for Ada?
> Or maybe, for selected record types, so that we do
>
>   if (lang_hooks.types.lower_bitfield_layout (rli->t))
>     finish_bitfield_layout (rli->t);
>
> ?  Letting the FE decide when to punt is certainly better than
> trying to figure it out from stor-layout.c code.  I suppose
> for types that are supposed to interoperate with C / C++ using
> the C / C++ rules makes sense (I suppose there is sth like
> C/C++ interoperability with Ada).

Yes, there is C/C++ interoperability in Ada.  But GNAT's policy is to be 
compatible with C/C++ by default, so I'd rather not deviate from the common 
code for such a central thing as record layout if we can avoid it.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-27  7:08                                                             ` Eric Botcazou
@ 2012-03-27  7:50                                                               ` Richard Guenther
  2012-03-27  8:39                                                                 ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-27  7:50 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Tue, 27 Mar 2012, Eric Botcazou wrote:

> > Btw, I put things in stor-layout.c exactly to allow a langhook
> > eventually controlling things for the bitfield representative.
> > Mind adding one that simply disables them completely for Ada?
> > Or maybe, for selected record types, so that we do
> >
> >   if (lang_hooks.types.lower_bitfield_layout (rli->t))
> >     finish_bitfield_layout (rli->t);
> >
> > ?  Letting the FE decide when to punt is certainly better than
> > trying to figure it out from stor-layout.c code.  I suppose
> > for types that are supposed to interoperate with C / C++ using
> > the C / C++ rules makes sense (I suppose there is sth like
> > C/C++ interoperability with Ada).
> 
> Yes, there is C/C++ interoperability in Ada.  But GNAT's policy is to be 
> compatible with C/C++ by default, so I'd rather not deviate from the common 
> code for such a central thing as record layout if we can avoid it.

I see.  Though the code does not affect layout but only access "layout"
for bitfields.  I'm fine with fixing the issues we run into elsewhere,
just the langhook is a possibility I had in mind from the start, in
case frontends differ in their memory model for bitfields.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-27  7:50                                                               ` Richard Guenther
@ 2012-03-27  8:39                                                                 ` Eric Botcazou
  2012-03-27  8:47                                                                   ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-27  8:39 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

> I see.  Though the code does not affect layout but only access "layout"
> for bitfields.  I'm fine with fixing the issues we run into elsewhere,
> just the langhook is a possibility I had in mind from the start, in
> case frontends differ in their memory model for bitfields.

Understood.  According to our internal testing, the issue we're discussing was 
the last problem introduced by the bitfield change, and I think that using the 
C/C++ model for C/C++-compatible bit fields is fine for GNAT.

May I apply the patch I posted?  It bootstrapped/regtested fine on x86-64/Linux.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-27  8:39                                                                 ` Eric Botcazou
@ 2012-03-27  8:47                                                                   ` Richard Guenther
  2012-03-30 20:58                                                                     ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-03-27  8:47 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc-patches

On Tue, 27 Mar 2012, Eric Botcazou wrote:

> > I see.  Though the code does not affect layout but only access "layout"
> > for bitfields.  I'm fine with fixing the issues we run into elsewhere,
> > just the langhook is a possibility I had in mind from the start, in
> > case frontends differ in their memory model for bitfields.
> 
> Understood.  According to our internal testing, the issue we're discussing was 
> the last problem introduced by the bitfield change, and I think that using the 
> C/C++ model for C/C++-compatible bit fields is fine for GNAT.
> 
> May I apply the patch I posted?  It bootstrapped/regtested fine on x86-64/Linux.

Yes.

Thanks,
Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-27  8:47                                                                   ` Richard Guenther
@ 2012-03-30 20:58                                                                     ` Eric Botcazou
  2012-04-02 14:23                                                                       ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-03-30 20:58 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Guenther, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1065 bytes --]

> > May I apply the patch I posted?  It bootstrapped/regtested fine on
> > x86-64/Linux.
>
> Yes.

Thanks.  Unfortunately, while this was the last identified problem on x86, 
another issue is visible on x86-64 as a miscompilation of XML/Ada at -O0.
Reduced testcase attached:

 gnat.dg/pack18.adb
 gnat.dg/pack18_pkg.ads

The executable segfaults because it attempts a read at 0x2000000000000000.

The scenario is as follows: Rec is a packed record, so its fields are bit 
fields, N being at bit offset 129.  The representative is at offset 0.

get_bit_range is invoked on N with a bitpos of 1, because there is a variable 
offset and N's DECL_FIELD_OFFSET is added to that offset instead of to bitpos.  Hence

  bitpos - bitoffset

is (unsigned HOST_WIDE_INT) -128.  This value enters the new code in 
store_bit_field unchanged, and the division:

     offset = bitregion_start / BITS_PER_UNIT;

yields the problematic big number.

It would therefore appear that bitstart and bitend need to be signed offsets, 
at least until they are adjusted by store_bit_field.
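
For concreteness, the arithmetic on an LP64 host (illustration only;
unsigned HOST_WIDE_INT is assumed to be a 64-bit unsigned long here):

  unsigned long bitpos = 1, bitoffset = 129;
  unsigned long bitregion_start = bitpos - bitoffset;
  /* bitregion_start == 0xffffffffffffff80 */
  unsigned long offset = bitregion_start / 8;   /* BITS_PER_UNIT */
  /* offset == 0x1ffffffffffffff0, which once added to the base address
     gives roughly the faulting 0x2000000000000000.  */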

-- 
Eric Botcazou

[-- Attachment #2: pack18.adb --]
[-- Type: text/x-adasrc, Size: 220 bytes --]

-- { dg-do run }

with Pack18_Pkg; use Pack18_Pkg;

procedure Pack18 is
   use Pack18_Pkg.Attributes_Tables;
   Table : Instance;
begin
   Init (Table);
   Set_Last (Table, 1);
   Table.Table (Last (Table)).N := 0;
end;

[-- Attachment #3: pack18_pkg.ads --]
[-- Type: text/x-adasrc, Size: 465 bytes --]

with GNAT.Dynamic_Tables;

package Pack18_Pkg is

   type String_Access is access String;

   type Rec is record
      S : String_Access;
      B : Boolean;
      N : Natural;
   end record;
   pragma Pack (Rec);

   package Attributes_Tables is new GNAT.Dynamic_Tables
     (Table_Component_Type => Rec,
      Table_Index_Type     => Natural,
      Table_Low_Bound      => 1,
      Table_Initial        => 200,
      Table_Increment      => 200);

end Pack18_Pkg;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-03-30 20:58                                                                     ` Eric Botcazou
@ 2012-04-02 14:23                                                                       ` Richard Guenther
  2012-04-03  8:26                                                                         ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-04-02 14:23 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Fri, 30 Mar 2012, Eric Botcazou wrote:

> > > May I apply the patch I posted?  It bootstrapped/regtested fine on
> > > x86-64/Linux.
> >
> > Yes.
> 
> Thanks.  Unfortunately, while this was the last identified problem on x86, 
> another issue is visible on x86-64 as a miscompilation of XML/Ada at -O0.
> Reduced testcase attached:
> 
>  gnat.dg/pack18.adb
>  gnat.dg/pack18_pkg.ads
> 
> The executable segfaults because it attempts a read at 0x2000000000000000.
> 
> The scenario is as follows: Rec is a packed record, so its fields are bit 
> fields, N being at bit offset 129.  The representative is at offset 0.
> 
> get_bit_range is invoked on N with a bitpos of 1, because there is a variable 
> offset and N's DECL_FIELD_OFFSET is added to that offset instead of to bitpos.  Hence
> 
>   bitpos - bitoffset
> 
> is (unsigned HOST_WIDE_INT) -128.  This value enters the new code in 
> store_bit_field unchanged, and the division:
> 
>      offset = bitregion_start / BITS_PER_UNIT;
> 
> yields the problematic big number.
> 
> It would therefore appear that bitstart and bitend need to be signed offsets, 
> at least until they are adjusted by store_bit_field.

Hmm, yeah.  Or something like

Index: expr.c
===================================================================
--- expr.c      (revision 186082)
+++ expr.c      (working copy)
@@ -4490,8 +4490,8 @@ get_bit_range (unsigned HOST_WIDE_INT *b
   bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
                - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
 
-  *bitstart = bitpos - bitoffset;
-  *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
+  *bitstart = bitpos < (HOST_WIDE_INT) bitoffset ? 0 : bitpos - bitoffset;
+  *bitend = bitpos + tree_low_cst (DECL_SIZE (repr), 1) - bitoffset - 1;
 }
 
 /* Returns true if the MEM_REF REF refers to an object that does not

which conservatively truncates the bitrange.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-04-02 14:23                                                                       ` Richard Guenther
@ 2012-04-03  8:26                                                                         ` Eric Botcazou
  2012-04-03  8:34                                                                           ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-04-03  8:26 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

> Hmm, yeah.  Or something like
>
> Index: expr.c
> ===================================================================
> --- expr.c      (revision 186082)
> +++ expr.c      (working copy)
> @@ -4490,8 +4490,8 @@ get_bit_range (unsigned HOST_WIDE_INT *b
>    bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
>                 - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
>
> -  *bitstart = bitpos - bitoffset;
> -  *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
> +  *bitstart = bitpos < (HOST_WIDE_INT) bitoffset ? 0 : bitpos - bitoffset;
> +  *bitend = bitpos + tree_low_cst (DECL_SIZE (repr), 1) - bitoffset - 1;
>  }
>
>  /* Returns true if the MEM_REF REF refers to an object that does not
>
> which conservatively truncates the bitrange.

What do you think about allowing get_bit_range to adjust offset and bitpos?

      tem = get_inner_reference (to, &bitsize, &bitpos, &offset, &mode1,
				 &unsignedp, &volatilep, true);

      if (TREE_CODE (to) == COMPONENT_REF
	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
	get_bit_range (&bitregion_start, &bitregion_end, &offset, &bitpos, to);

so as to have a really non-negative bitregion_start/bitregion_end?  It would 
assert that offset is already non-null in that case.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-04-03  8:26                                                                         ` Eric Botcazou
@ 2012-04-03  8:34                                                                           ` Richard Guenther
  2012-04-03 11:09                                                                             ` Eric Botcazou
  0 siblings, 1 reply; 46+ messages in thread
From: Richard Guenther @ 2012-04-03  8:34 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Tue, 3 Apr 2012, Eric Botcazou wrote:

> > Hmm, yeah.  Or something like
> >
> > Index: expr.c
> > ===================================================================
> > --- expr.c      (revision 186082)
> > +++ expr.c      (working copy)
> > @@ -4490,8 +4490,8 @@ get_bit_range (unsigned HOST_WIDE_INT *b
> >    bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
> >                 - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
> >
> > -  *bitstart = bitpos - bitoffset;
> > -  *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
> > +  *bitstart = bitpos < (HOST_WIDE_INT) bitoffset ? 0 : bitpos - bitoffset;
> > +  *bitend = bitpos + tree_low_cst (DECL_SIZE (repr), 1) - bitoffset - 1;
> >  }
> >
> >  /* Returns true if the MEM_REF REF refers to an object that does not
> >
> > which conservatively truncates the bitrange.
> 
> What do you think about allowing get_bit_range to adjust offset and bitpos?
> 
>       tem = get_inner_reference (to, &bitsize, &bitpos, &offset, &mode1,
> 				 &unsignedp, &volatilep, true);
> 
>       if (TREE_CODE (to) == COMPONENT_REF
> 	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
> 	get_bit_range (&bitregion_start, &bitregion_end, &offset, &bitpos, to);
> 
> so as to have a really non-negative bitregion_start/bitregion_end?  It would 
> assert that offset is already non-null in that case.

For the case in question offset is (D.2640_7 + -1) * 20 + 16.  I wonder
why DECL_FIELD_OFFSET of the outermost COMPONENT_REF is not added
to bitpos by get_inner_reference (that is what get_bit_range assumes ...).
The FIELD_DECL is

    arg 1 <field_decl 0x7ffff5c50000 n
        type <integer_type 0x7ffff5c4f2a0 natural___XDLU_0__2147483647>
        unsigned external packed bit-field nonaddressable SI
        file pack18_pkg.ads line 10 col 7
        size <integer_cst 0x7ffff5ae4c80 constant visited 31>
        unit size <integer_cst 0x7ffff5ae4260 4>
        align 1 offset_align 128
        offset <integer_cst 0x7ffff5ae4420 constant visited 16>
        bit offset <integer_cst 0x7ffff5ae4560 constant visited 1>
        bit_field_type <integer_type 0x7ffff5c4f2a0 natural___XDLU_0__2147483647>
        context <record_type 0x7ffff5c4f0a8 pack18_pkg__rec>>>

and TREE_OPERAND (to, 2) is NULL and component_ref_field_offset
returns 16.  In the alias-oracle variants we add constant field-offsets
to the bitposition part (at least that avoids the need to dissect
the constant part of the offset from the non-constant part).

So, how would you make sure this works?  Match the fact that
get_inner_reference does _not_ add DECL_FIELD_OFFSET to bitpos,
and, if DECL_FIELD_OFFSET is an INTEGER_CST simply subtract that
from offset and add it to bitpos?  I suppose that would work.

Though doing that in get_inner_reference for DECL_BIT_FIELD_TYPE
fields may make sense as well.

Richard.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-04-03  8:34                                                                           ` Richard Guenther
@ 2012-04-03 11:09                                                                             ` Eric Botcazou
  2012-04-03 11:44                                                                               ` Richard Guenther
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Botcazou @ 2012-04-03 11:09 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1365 bytes --]

> For the case in question offset is (D.2640_7 + -1) * 20 + 16.  I wonder
> why DECL_FIELD_OFFSET of the outermost COMPONENT_REF is not added
> to bitpos by get_inner_reference (that is what get_bit_range assumes ...).

DECL_FIELD_OFFSET is added to offset and DECL_FIELD_BIT_OFFSET to bitpos.

> So, how would you make sure this works?  Match the fact that
> get_inner_reference does _not_ add DECL_FIELD_OFFSET to bitpos,
> and, if DECL_FIELD_OFFSET is an INTEGER_CST simply subtract that
> from offset and add it to bitpos?  I suppose that would work.

Yes, but the amount is simply the negative bitstart (which is a multiple of 
BITS_PER_UNIT).  This is the same kind of adjustment now done at the end of 
get_inner_reference to avoid negative bit positions there too.

> Though doing that in get_inner_reference for DECL_BIT_FIELD_TYPE
> fields may make sense as well.

That would be more complicated, as we would need to split the offset into
variable and fixed part.


Tentative patch attached, regtested for Ada on x86 and x86-64.  I'll do a full 
testing cycle if it is approved.


	* expr.c (get_bit_range): Add OFFSET parameter and adjust BITPOS.
	Change type of BITOFFSET to signed.  Make sure the lower bound of
	the computed range is non-negative by adjusting OFFSET and BITPOS.
	(expand_assignment): Adjust call to get_bit_range.


-- 
Eric Botcazou

[-- Attachment #2: get_bit_range.diff --]
[-- Type: text/x-diff, Size: 2557 bytes --]

Index: expr.c
===================================================================
--- expr.c	(revision 186054)
+++ expr.c	(working copy)
@@ -4431,19 +4431,22 @@ optimize_bitfield_assignment_op (unsigne
 /* In the C++ memory model, consecutive bit fields in a structure are
    considered one memory location.
 
-   Given a COMPONENT_REF EXP at bit position BITPOS, this function
+   Given a COMPONENT_REF EXP at position (OFFSET, BITPOS), this function
    returns the bit range of consecutive bits in which this COMPONENT_REF
-   belongs in.  The values are returned in *BITSTART and *BITEND.
+   belongs in.  The values are returned in *BITSTART and *BITEND.  Both
+   OFFSET and BITPOS may be adjusted in the process.
+
    If the access does not need to be restricted 0 is returned in
    *BITSTART and *BITEND.  */
 
 static void
 get_bit_range (unsigned HOST_WIDE_INT *bitstart,
 	       unsigned HOST_WIDE_INT *bitend,
-	       tree exp,
-	       HOST_WIDE_INT bitpos)
+	       tree *offset,
+	       HOST_WIDE_INT *bitpos,
+	       tree exp)
 {
-  unsigned HOST_WIDE_INT bitoffset;
+  HOST_WIDE_INT bitoffset;
   tree field, repr;
 
   gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
@@ -4490,7 +4493,25 @@ get_bit_range (unsigned HOST_WIDE_INT *b
   bitoffset += (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
 		- tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
 
-  *bitstart = bitpos - bitoffset;
+  /* If the adjustment is larger than bitpos, we would have a negative bit
+     position for the lower bound and this may wreak havoc later.  This can
+     occur only if we have a non-null offset, so adjust offset and bitpos
+     to make the lower bound non-negative.  */
+  if (bitoffset > *bitpos)
+    {
+      HOST_WIDE_INT adjust = bitoffset - *bitpos;
+
+      gcc_assert ((adjust % BITS_PER_UNIT) == 0);
+      gcc_assert (*offset != NULL_TREE);
+
+      *bitpos += adjust;
+      *offset
+	= size_binop (MINUS_EXPR, *offset, size_int (adjust / BITS_PER_UNIT));
+      *bitstart = 0;
+    }
+  else
+    *bitstart = *bitpos - bitoffset;
+
   *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
 }
 
@@ -4595,7 +4616,7 @@ expand_assignment (tree to, tree from, b
 
       if (TREE_CODE (to) == COMPONENT_REF
 	  && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
-	get_bit_range (&bitregion_start, &bitregion_end, to, bitpos);
+	get_bit_range (&bitregion_start, &bitregion_end, &offset, &bitpos, to);
 
       /* If we are going to use store_bit_field and extract_bit_field,
 	 make sure to_rtx will be safe for multiple use.  */

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere
  2012-04-03 11:09                                                                             ` Eric Botcazou
@ 2012-04-03 11:44                                                                               ` Richard Guenther
  0 siblings, 0 replies; 46+ messages in thread
From: Richard Guenther @ 2012-04-03 11:44 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc-patches

On Tue, 3 Apr 2012, Eric Botcazou wrote:

> > For the case in question offset is (D.2640_7 + -1) * 20 + 16.  I wonder
> > why DECL_FIELD_OFFSET of the outermost COMPONENT_REF is not added
> > to bitpos by get_inner_reference (that is what get_bit_range assumes ...).
> 
> DECL_FIELD_OFFSET is added to offset and DECL_FIELD_BIT_OFFSET to bitpos.
> 
> > So, how would you make sure this works?  Match the fact that
> > get_inner_reference does _not_ add DECL_FIELD_OFFSET to bitpos,
> > and, if DECL_FIELD_OFFSET is an INTEGER_CST simply subtract that
> > from offset and add it to bitpos?  I suppose that would work.
> 
> Yes, but the amount is simply the negative bitstart (which is a multiple of 
> BITS_PER_UNIT).  This is the same kind of adjustment now done at the end of 
> get_inner_reference to avoid negative bit positions there too.
> 
> > Though doing that in get_inner_reference for DECL_BIT_FIELD_TYPE
> > fields may make sense as well.
> 
> That would be more complicated, as we would need to split the offset into
> variable and fixed part.
> 
> 
> Tentative patch attached, regtested for Ada on x86 and x86-64.  I'll do a full 
> testing cycle if it is approved.

Yes, the patch is ok.

Thanks,
Richard.

> 
> 	* expr.c (get_bit_range): Add OFFSET parameter and adjust BITPOS.
> 	Change type of BITOFFSET to signed.  Make sure the lower bound of
> 	the computed range is non-negative by adjusting OFFSET and BITPOS.
> 	(expand_assignment): Adjust call to get_bit_range.

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread

Thread overview: 46+ messages
2012-03-05 13:26 [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere Richard Guenther
2012-03-09  9:02 ` Eric Botcazou
2012-03-12  9:38   ` Richard Guenther
2012-03-12 10:21     ` Eric Botcazou
2012-03-13 14:19       ` Richard Guenther
2012-03-15 14:43         ` Eric Botcazou
2012-03-15 14:58           ` Richard Guenther
2012-03-15 15:30             ` Richard Guenther
2012-03-15 16:19               ` Eric Botcazou
2012-03-15 17:03                 ` Eric Botcazou
2012-03-16  9:42                   ` Richard Guenther
2012-03-19 10:45                     ` Eric Botcazou
2012-03-19 13:14                       ` Richard Guenther
2012-03-20  9:30                         ` Richard Guenther
2012-03-22 10:26                           ` Eric Botcazou
2012-03-22 10:38                             ` Richard Guenther
2012-03-22 11:13                               ` Eric Botcazou
2012-03-22 11:22                                 ` Richard Guenther
2012-03-22 11:48                                   ` Eric Botcazou
2012-03-22 12:04                                     ` Richard Guenther
2012-03-22 12:18                                       ` Eric Botcazou
2012-03-22 12:39                                         ` Richard Guenther
2012-03-23  9:13                                           ` Richard Guenther
2012-03-26  7:18                                             ` Eric Botcazou
2012-03-26  7:37                                               ` Richard Guenther
2012-03-26  7:43                                                 ` Eric Botcazou
2012-03-26  8:01                                                   ` Richard Guenther
2012-03-26  8:18                                                     ` Eric Botcazou
2012-03-26  8:34                                                       ` Richard Guenther
2012-03-26 17:54                                                         ` Eric Botcazou
2012-03-27  6:51                                                           ` Richard Guenther
2012-03-27  7:08                                                             ` Eric Botcazou
2012-03-27  7:50                                                               ` Richard Guenther
2012-03-27  8:39                                                                 ` Eric Botcazou
2012-03-27  8:47                                                                   ` Richard Guenther
2012-03-30 20:58                                                                     ` Eric Botcazou
2012-04-02 14:23                                                                       ` Richard Guenther
2012-04-03  8:26                                                                         ` Eric Botcazou
2012-04-03  8:34                                                                           ` Richard Guenther
2012-04-03 11:09                                                                             ` Eric Botcazou
2012-04-03 11:44                                                                               ` Richard Guenther
2012-03-26  8:10                                                   ` Eric Botcazou
2012-03-26  8:52                                                     ` Richard Guenther
2012-03-26 18:04                                                       ` Eric Botcazou
2012-03-27  6:51                                                         ` Richard Guenther
2012-03-26  7:18                                           ` Eric Botcazou
