From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [029/nnn] poly_int: get_ref_base_and_extent
Date: Mon, 23 Oct 2017 17:12:00 -0000
Message-ID: <87efptpz54.fsf@linaro.org>
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")

This patch changes the types of the bit offsets and sizes returned
by get_ref_base_and_extent to poly_int64.
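
As a hedged sketch (not taken from any modified file; the helper name
is invented), a caller that can cope with polynomial values now tests
them with the known_size_p/must_eq predicates used elsewhere in this
series, rather than comparing against -1 or using ==:

    /* Hypothetical helper: true if REF accesses a DECL whose bit
       offset and size are exact (SIZE == MAX_SIZE), even when those
       values are only known as runtime polynomials.  */
    static bool
    example_exact_decl_access_p (tree ref)
    {
      poly_int64 offset, size, max_size;
      bool reverse;
      tree base = get_ref_base_and_extent (ref, &offset, &size,
                                           &max_size, &reverse);
      return (DECL_P (base)
              && known_size_p (max_size)
              && must_eq (size, max_size));
    }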

There are some callers that can't sensibly operate on polynomial
offsets or handle cases where the offset and size aren't known
exactly.  This includes the IPA devirtualisation code (since
there's no defined way of having vtables at variable offsets)
and some parts of the DWARF code.  The patch therefore adds
a helper function get_ref_base_and_extent_hwi that either returns
exact HOST_WIDE_INT bit positions and sizes or returns a null
base to indicate failure.
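
As a hedged illustration of the resulting simplification (the function
name is hypothetical, but the pattern follows the ipa-prop.c changes
below): instead of checking "max_size == -1 || max_size != size
|| offset < 0" by hand, a caller that needs exact HOST_WIDE_INT values
just tests for a null base:

    static bool
    example_constant_load_base_p (tree op, HOST_WIDE_INT *offset_p)
    {
      HOST_WIDE_INT size;
      bool reverse;
      tree base = get_ref_base_and_extent_hwi (op, offset_p, &size,
                                               &reverse);
      if (!base)
        /* Variable offset/size, negative offset, or SIZE != MAX_SIZE.  */
        return false;
      /* *OFFSET_P and SIZE are now exact, non-negative bit values.  */
      return DECL_P (base) || TREE_CODE (base) == MEM_REF;
    }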


2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-dfa.h (get_ref_base_and_extent): Return the base, size and
	max_size as poly_int64_pods rather than HOST_WIDE_INTs.
	(get_ref_base_and_extent_hwi): Declare.
	* tree-dfa.c (get_ref_base_and_extent): Return the base, size and
	max_size as poly_int64_pods rather than HOST_WIDE_INTs.
	(get_ref_base_and_extent_hwi): New function.
	* cfgexpand.c (expand_debug_expr): Update call to
	get_ref_base_and_extent.
	* dwarf2out.c (add_var_loc_to_decl): Likewise.
	* gimple-fold.c (get_base_constructor): Return the offset as a
	poly_int64_pod rather than a HOST_WIDE_INT.
	(fold_const_aggregate_ref_1): Track polynomial sizes and offsets.
	* ipa-polymorphic-call.c
	(ipa_polymorphic_call_context::set_by_invariant)
	(extr_type_from_vtbl_ptr_store): Track polynomial offsets.
	(ipa_polymorphic_call_context::ipa_polymorphic_call_context)
	(check_stmt_for_type_change): Use get_ref_base_and_extent_hwi
	rather than get_ref_base_and_extent.
	(ipa_polymorphic_call_context::get_dynamic_type): Likewise.
	* ipa-prop.c (ipa_load_from_parm_agg, compute_complex_assign_jump_func)
	(get_ancestor_addr_info, determine_locally_known_aggregate_parts):
	Likewise.
	(ipa_get_adjustment_candidate): Update call to get_ref_base_and_extent.
	* tree-sra.c (create_access, get_access_for_expr): Likewise.
	* tree-ssa-alias.c (ao_ref_base, aliasing_component_refs_p)
	(stmt_kills_ref_p): Likewise.
	* tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
	* tree-ssa-scopedtables.c (avail_expr_hash, equal_mem_array_ref_p):
	Likewise.
	* tree-ssa-sccvn.c (vn_reference_lookup_3): Likewise.
	Use get_ref_base_and_extent_hwi rather than get_ref_base_and_extent
	when calling native_encode_expr.
	* tree-ssa-structalias.c (get_constraint_for_component_ref): Update
	call to get_ref_base_and_extent.
	(do_structure_copy): Use get_ref_base_and_extent_hwi rather than
	get_ref_base_and_extent.
	* var-tracking.c (track_expr_p): Likewise.

Index: gcc/tree-dfa.h
===================================================================
--- gcc/tree-dfa.h	2017-10-23 17:07:40.909726192 +0100
+++ gcc/tree-dfa.h	2017-10-23 17:16:59.705267681 +0100
@@ -29,8 +29,10 @@ extern void debug_dfa_stats (void);
 extern tree ssa_default_def (struct function *, tree);
 extern void set_ssa_default_def (struct function *, tree, tree);
 extern tree get_or_create_ssa_default_def (struct function *, tree);
-extern tree get_ref_base_and_extent (tree, HOST_WIDE_INT *,
-				     HOST_WIDE_INT *, HOST_WIDE_INT *, bool *);
+extern tree get_ref_base_and_extent (tree, poly_int64_pod *, poly_int64_pod *,
+				     poly_int64_pod *, bool *);
+extern tree get_ref_base_and_extent_hwi (tree, HOST_WIDE_INT *,
+					 HOST_WIDE_INT *, bool *);
 extern tree get_addr_base_and_unit_offset_1 (tree, HOST_WIDE_INT *,
 					     tree (*) (tree));
 extern tree get_addr_base_and_unit_offset (tree, HOST_WIDE_INT *);
Index: gcc/tree-dfa.c
===================================================================
--- gcc/tree-dfa.c	2017-10-23 17:07:40.909726192 +0100
+++ gcc/tree-dfa.c	2017-10-23 17:16:59.705267681 +0100
@@ -377,15 +377,15 @@ get_or_create_ssa_default_def (struct fu
    true, the storage order of the reference is reversed.  */
 
 tree
-get_ref_base_and_extent (tree exp, HOST_WIDE_INT *poffset,
-			 HOST_WIDE_INT *psize,
-			 HOST_WIDE_INT *pmax_size,
+get_ref_base_and_extent (tree exp, poly_int64_pod *poffset,
+			 poly_int64_pod *psize,
+			 poly_int64_pod *pmax_size,
 			 bool *preverse)
 {
-  offset_int bitsize = -1;
-  offset_int maxsize;
+  poly_offset_int bitsize = -1;
+  poly_offset_int maxsize;
   tree size_tree = NULL_TREE;
-  offset_int bit_offset = 0;
+  poly_offset_int bit_offset = 0;
   bool seen_variable_array_ref = false;
 
   /* First get the final access size and the storage order from just the
@@ -400,11 +400,11 @@ get_ref_base_and_extent (tree exp, HOST_
       if (mode == BLKmode)
 	size_tree = TYPE_SIZE (TREE_TYPE (exp));
       else
-	bitsize = int (GET_MODE_BITSIZE (mode));
+	bitsize = GET_MODE_BITSIZE (mode);
     }
   if (size_tree != NULL_TREE
-      && TREE_CODE (size_tree) == INTEGER_CST)
-    bitsize = wi::to_offset (size_tree);
+      && poly_int_tree_p (size_tree))
+    bitsize = wi::to_poly_offset (size_tree);
 
   *preverse = reverse_storage_order_for_component_p (exp);
 
@@ -419,7 +419,7 @@ get_ref_base_and_extent (tree exp, HOST_
       switch (TREE_CODE (exp))
 	{
 	case BIT_FIELD_REF:
-	  bit_offset += wi::to_offset (TREE_OPERAND (exp, 2));
+	  bit_offset += wi::to_poly_offset (TREE_OPERAND (exp, 2));
 	  break;
 
 	case COMPONENT_REF:
@@ -427,10 +427,10 @@ get_ref_base_and_extent (tree exp, HOST_
 	    tree field = TREE_OPERAND (exp, 1);
 	    tree this_offset = component_ref_field_offset (exp);
 
-	    if (this_offset && TREE_CODE (this_offset) == INTEGER_CST)
+	    if (this_offset && poly_int_tree_p (this_offset))
 	      {
-		offset_int woffset = (wi::to_offset (this_offset)
-				      << LOG2_BITS_PER_UNIT);
+		poly_offset_int woffset = (wi::to_poly_offset (this_offset)
+					   << LOG2_BITS_PER_UNIT);
 		woffset += wi::to_offset (DECL_FIELD_BIT_OFFSET (field));
 		bit_offset += woffset;
 
@@ -438,7 +438,7 @@ get_ref_base_and_extent (tree exp, HOST_
 		   referenced the last field of a struct or a union member
 		   then we have to adjust maxsize by the padding at the end
 		   of our field.  */
-		if (seen_variable_array_ref && maxsize != -1)
+		if (seen_variable_array_ref && known_size_p (maxsize))
 		  {
 		    tree stype = TREE_TYPE (TREE_OPERAND (exp, 0));
 		    tree next = DECL_CHAIN (field);
@@ -450,14 +450,15 @@ get_ref_base_and_extent (tree exp, HOST_
 			tree fsize = DECL_SIZE_UNIT (field);
 			tree ssize = TYPE_SIZE_UNIT (stype);
 			if (fsize == NULL
-			    || TREE_CODE (fsize) != INTEGER_CST
+			    || !poly_int_tree_p (fsize)
 			    || ssize == NULL
-			    || TREE_CODE (ssize) != INTEGER_CST)
+			    || !poly_int_tree_p (ssize))
 			  maxsize = -1;
 			else
 			  {
-			    offset_int tem = (wi::to_offset (ssize)
-					      - wi::to_offset (fsize));
+			    poly_offset_int tem
+			      = (wi::to_poly_offset (ssize)
+				 - wi::to_poly_offset (fsize));
 			    tem <<= LOG2_BITS_PER_UNIT;
 			    tem -= woffset;
 			    maxsize += tem;
@@ -471,10 +472,10 @@ get_ref_base_and_extent (tree exp, HOST_
 		/* We need to adjust maxsize to the whole structure bitsize.
 		   But we can subtract any constant offset seen so far,
 		   because that would get us out of the structure otherwise.  */
-		if (maxsize != -1
+		if (known_size_p (maxsize)
 		    && csize
-		    && TREE_CODE (csize) == INTEGER_CST)
-		  maxsize = wi::to_offset (csize) - bit_offset;
+		    && poly_int_tree_p (csize))
+		  maxsize = wi::to_poly_offset (csize) - bit_offset;
 		else
 		  maxsize = -1;
 	      }
@@ -488,14 +489,15 @@ get_ref_base_and_extent (tree exp, HOST_
 	    tree low_bound, unit_size;
 
 	    /* If the resulting bit-offset is constant, track it.  */
-	    if (TREE_CODE (index) == INTEGER_CST
+	    if (poly_int_tree_p (index)
 		&& (low_bound = array_ref_low_bound (exp),
- 		    TREE_CODE (low_bound) == INTEGER_CST)
+		    poly_int_tree_p (low_bound))
 		&& (unit_size = array_ref_element_size (exp),
 		    TREE_CODE (unit_size) == INTEGER_CST))
 	      {
-		offset_int woffset
-		  = wi::sext (wi::to_offset (index) - wi::to_offset (low_bound),
+		poly_offset_int woffset
+		  = wi::sext (wi::to_poly_offset (index)
+			      - wi::to_poly_offset (low_bound),
 			      TYPE_PRECISION (TREE_TYPE (index)));
 		woffset *= wi::to_offset (unit_size);
 		woffset <<= LOG2_BITS_PER_UNIT;
@@ -512,10 +514,10 @@ get_ref_base_and_extent (tree exp, HOST_
 		/* We need to adjust maxsize to the whole array bitsize.
 		   But we can subtract any constant offset seen so far,
 		   because that would get us outside of the array otherwise.  */
-		if (maxsize != -1
+		if (known_size_p (maxsize)
 		    && asize
-		    && TREE_CODE (asize) == INTEGER_CST)
-		  maxsize = wi::to_offset (asize) - bit_offset;
+		    && poly_int_tree_p (asize))
+		  maxsize = wi::to_poly_offset (asize) - bit_offset;
 		else
 		  maxsize = -1;
 
@@ -560,11 +562,11 @@ get_ref_base_and_extent (tree exp, HOST_
 	     base type boundary.  This needs to include possible trailing
 	     padding that is there for alignment purposes.  */
 	  if (seen_variable_array_ref
-	      && maxsize != -1
+	      && known_size_p (maxsize)
 	      && (TYPE_SIZE (TREE_TYPE (exp)) == NULL_TREE
-		  || TREE_CODE (TYPE_SIZE (TREE_TYPE (exp))) != INTEGER_CST
-		  || (bit_offset + maxsize
-		      == wi::to_offset (TYPE_SIZE (TREE_TYPE (exp))))))
+		  || !poly_int_tree_p (TYPE_SIZE (TREE_TYPE (exp)))
+		  || may_eq (bit_offset + maxsize,
+			     wi::to_poly_offset (TYPE_SIZE (TREE_TYPE (exp))))))
 	    maxsize = -1;
 
 	  /* Hand back the decl for MEM[&decl, off].  */
@@ -574,12 +576,13 @@ get_ref_base_and_extent (tree exp, HOST_
 		exp = TREE_OPERAND (TREE_OPERAND (exp, 0), 0);
 	      else
 		{
-		  offset_int off = mem_ref_offset (exp);
+		  poly_offset_int off = mem_ref_offset (exp);
 		  off <<= LOG2_BITS_PER_UNIT;
 		  off += bit_offset;
-		  if (wi::fits_shwi_p (off))
+		  poly_int64 off_hwi;
+		  if (off.to_shwi (&off_hwi))
 		    {
-		      bit_offset = off;
+		      bit_offset = off_hwi;
 		      exp = TREE_OPERAND (TREE_OPERAND (exp, 0), 0);
 		    }
 		}
@@ -594,7 +597,7 @@ get_ref_base_and_extent (tree exp, HOST_
     }
 
  done:
-  if (!wi::fits_shwi_p (bitsize) || wi::neg_p (bitsize))
+  if (!bitsize.to_shwi (psize) || may_lt (*psize, 0))
     {
       *poffset = 0;
       *psize = -1;
@@ -603,9 +606,10 @@ get_ref_base_and_extent (tree exp, HOST_
       return exp;
     }
 
-  *psize = bitsize.to_shwi ();
-
-  if (!wi::fits_shwi_p (bit_offset))
+  /* ???  Due to negative offsets in ARRAY_REF we can end up with
+     negative bit_offset here.  We might want to store a zero offset
+     in this case.  */
+  if (!bit_offset.to_shwi (poffset))
     {
       *poffset = 0;
       *pmax_size = -1;
@@ -625,44 +629,37 @@ get_ref_base_and_extent (tree exp, HOST_
 	  if (TREE_CODE (TREE_TYPE (exp)) == ARRAY_TYPE
 	      || (seen_variable_array_ref
 		  && (sz_tree == NULL_TREE
-		      || TREE_CODE (sz_tree) != INTEGER_CST
-		      || (bit_offset + maxsize == wi::to_offset (sz_tree)))))
+		      || !poly_int_tree_p (sz_tree)
+		      || may_eq (bit_offset + maxsize,
+				 wi::to_poly_offset (sz_tree)))))
 	    maxsize = -1;
 	}
       /* If maxsize is unknown adjust it according to the size of the
          base decl.  */
-      else if (maxsize == -1
-	  && DECL_SIZE (exp)
-	  && TREE_CODE (DECL_SIZE (exp)) == INTEGER_CST)
-	maxsize = wi::to_offset (DECL_SIZE (exp)) - bit_offset;
+      else if (!known_size_p (maxsize)
+	       && DECL_SIZE (exp)
+	       && poly_int_tree_p (DECL_SIZE (exp)))
+	maxsize = wi::to_poly_offset (DECL_SIZE (exp)) - bit_offset;
     }
   else if (CONSTANT_CLASS_P (exp))
     {
       /* If maxsize is unknown adjust it according to the size of the
          base type constant.  */
-      if (maxsize == -1
+      if (!known_size_p (maxsize)
 	  && TYPE_SIZE (TREE_TYPE (exp))
-	  && TREE_CODE (TYPE_SIZE (TREE_TYPE (exp))) == INTEGER_CST)
-	maxsize = (wi::to_offset (TYPE_SIZE (TREE_TYPE (exp)))
+	  && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (exp))))
+	maxsize = (wi::to_poly_offset (TYPE_SIZE (TREE_TYPE (exp)))
 		   - bit_offset);
     }
 
-  /* ???  Due to negative offsets in ARRAY_REF we can end up with
-     negative bit_offset here.  We might want to store a zero offset
-     in this case.  */
-  *poffset = bit_offset.to_shwi ();
-  if (!wi::fits_shwi_p (maxsize) || wi::neg_p (maxsize))
+  if (!maxsize.to_shwi (pmax_size)
+      || may_lt (*pmax_size, 0)
+      || !endpoint_representable_p (*poffset, *pmax_size))
     *pmax_size = -1;
-  else
-    {
-      *pmax_size = maxsize.to_shwi ();
-      if (*poffset > HOST_WIDE_INT_MAX - *pmax_size)
-	*pmax_size = -1;
-    }
 
   /* Punt if *POFFSET + *PSIZE overflows in HOST_WIDE_INT, the callers don't
      check for such overflows individually and assume it works.  */
-  if (*psize != -1 && *poffset > HOST_WIDE_INT_MAX - *psize)
+  if (!endpoint_representable_p (*poffset, *psize))
     {
       *poffset = 0;
       *psize = -1;
@@ -674,6 +671,32 @@ get_ref_base_and_extent (tree exp, HOST_
   return exp;
 }
 
+/* Like get_ref_base_and_extent, but for cases in which we only care
+   about constant-width accesses at constant offsets.  Return null
+   if the access is anything else.  */
+
+tree
+get_ref_base_and_extent_hwi (tree exp, HOST_WIDE_INT *poffset,
+			     HOST_WIDE_INT *psize, bool *preverse)
+{
+  poly_int64 offset, size, max_size;
+  HOST_WIDE_INT const_offset, const_size;
+  bool reverse;
+  tree decl = get_ref_base_and_extent (exp, &offset, &size, &max_size,
+				       &reverse);
+  if (!offset.is_constant (&const_offset)
+      || !size.is_constant (&const_size)
+      || const_offset < 0
+      || !known_size_p (max_size)
+      || may_ne (max_size, const_size))
+    return NULL_TREE;
+
+  *poffset = const_offset;
+  *psize = const_size;
+  *preverse = reverse;
+  return decl;
+}
+
 /* Returns the base object and a constant BITS_PER_UNIT offset in *POFFSET that
    denotes the starting address of the memory access EXP.
    Returns NULL_TREE if the offset is not constant or any component
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	2017-10-23 17:11:40.240937549 +0100
+++ gcc/cfgexpand.c	2017-10-23 17:16:59.700268356 +0100
@@ -4878,7 +4878,7 @@ expand_debug_expr (tree exp)
 
 	  if (handled_component_p (TREE_OPERAND (exp, 0)))
 	    {
-	      HOST_WIDE_INT bitoffset, bitsize, maxsize;
+	      poly_int64 bitoffset, bitsize, maxsize, byteoffset;
 	      bool reverse;
 	      tree decl
 		= get_ref_base_and_extent (TREE_OPERAND (exp, 0), &bitoffset,
@@ -4888,12 +4888,12 @@ expand_debug_expr (tree exp)
 		   || TREE_CODE (decl) == RESULT_DECL)
 		  && (!TREE_ADDRESSABLE (decl)
 		      || target_for_debug_bind (decl))
-		  && (bitoffset % BITS_PER_UNIT) == 0
-		  && bitsize > 0
-		  && bitsize == maxsize)
+		  && multiple_p (bitoffset, BITS_PER_UNIT, &byteoffset)
+		  && must_gt (bitsize, 0)
+		  && must_eq (bitsize, maxsize))
 		{
 		  rtx base = gen_rtx_DEBUG_IMPLICIT_PTR (mode, decl);
-		  return plus_constant (mode, base, bitoffset / BITS_PER_UNIT);
+		  return plus_constant (mode, base, byteoffset);
 		}
 	    }
 
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2017-10-23 17:16:57.210604569 +0100
+++ gcc/dwarf2out.c	2017-10-23 17:16:59.703267951 +0100
@@ -5914,17 +5914,15 @@ add_var_loc_to_decl (tree decl, rtx loc_
 	  || (TREE_CODE (realdecl) == MEM_REF
 	      && TREE_CODE (TREE_OPERAND (realdecl, 0)) == ADDR_EXPR))
 	{
-	  HOST_WIDE_INT maxsize;
 	  bool reverse;
-	  tree innerdecl
-	    = get_ref_base_and_extent (realdecl, &bitpos, &bitsize, &maxsize,
-				       &reverse);
-	  if (!DECL_P (innerdecl)
+	  tree innerdecl = get_ref_base_and_extent_hwi (realdecl, &bitpos,
+							&bitsize, &reverse);
+	  if (!innerdecl
+	      || !DECL_P (innerdecl)
 	      || DECL_IGNORED_P (innerdecl)
 	      || TREE_STATIC (innerdecl)
-	      || bitsize <= 0
-	      || bitpos + bitsize > 256
-	      || bitsize != maxsize)
+	      || bitsize == 0
+	      || bitpos + bitsize > 256)
 	    return NULL;
 	  decl = innerdecl;
 	}
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c	2017-10-23 17:11:40.321090726 +0100
+++ gcc/gimple-fold.c	2017-10-23 17:16:59.703267951 +0100
@@ -6171,10 +6171,10 @@ gimple_fold_stmt_to_constant (gimple *st
    is not explicitly available, but it is known to be zero
    such as 'static const int a;'.  */
 static tree
-get_base_constructor (tree base, HOST_WIDE_INT *bit_offset,
+get_base_constructor (tree base, poly_int64_pod *bit_offset,
 		      tree (*valueize)(tree))
 {
-  HOST_WIDE_INT bit_offset2, size, max_size;
+  poly_int64 bit_offset2, size, max_size;
   bool reverse;
 
   if (TREE_CODE (base) == MEM_REF)
@@ -6226,7 +6226,7 @@ get_base_constructor (tree base, HOST_WI
     case COMPONENT_REF:
       base = get_ref_base_and_extent (base, &bit_offset2, &size, &max_size,
 				      &reverse);
-      if (max_size == -1 || size != max_size)
+      if (!known_size_p (max_size) || may_ne (size, max_size))
 	return NULL_TREE;
       *bit_offset +=  bit_offset2;
       return get_base_constructor (base, bit_offset, valueize);
@@ -6437,7 +6437,7 @@ fold_ctor_reference (tree type, tree cto
 fold_const_aggregate_ref_1 (tree t, tree (*valueize) (tree))
 {
   tree ctor, idx, base;
-  HOST_WIDE_INT offset, size, max_size;
+  poly_int64 offset, size, max_size;
   tree tem;
   bool reverse;
 
@@ -6463,23 +6463,23 @@ fold_const_aggregate_ref_1 (tree t, tree
       if (TREE_CODE (TREE_OPERAND (t, 1)) == SSA_NAME
 	  && valueize
 	  && (idx = (*valueize) (TREE_OPERAND (t, 1)))
-	  && TREE_CODE (idx) == INTEGER_CST)
+	  && poly_int_tree_p (idx))
 	{
 	  tree low_bound, unit_size;
 
 	  /* If the resulting bit-offset is constant, track it.  */
 	  if ((low_bound = array_ref_low_bound (t),
-	       TREE_CODE (low_bound) == INTEGER_CST)
+	       poly_int_tree_p (low_bound))
 	      && (unit_size = array_ref_element_size (t),
 		  tree_fits_uhwi_p (unit_size)))
 	    {
-	      offset_int woffset
-		= wi::sext (wi::to_offset (idx) - wi::to_offset (low_bound),
+	      poly_offset_int woffset
+		= wi::sext (wi::to_poly_offset (idx)
+			    - wi::to_poly_offset (low_bound),
 			    TYPE_PRECISION (TREE_TYPE (idx)));
 
-	      if (wi::fits_shwi_p (woffset))
+	      if (woffset.to_shwi (&offset))
 		{
-		  offset = woffset.to_shwi ();
 		  /* TODO: This code seems wrong, multiply then check
 		     to see if it fits.  */
 		  offset *= tree_to_uhwi (unit_size);
@@ -6492,7 +6492,7 @@ fold_const_aggregate_ref_1 (tree t, tree
 		    return build_zero_cst (TREE_TYPE (t));
 		  /* Out of bound array access.  Value is undefined,
 		     but don't fold.  */
-		  if (offset < 0)
+		  if (may_lt (offset, 0))
 		    return NULL_TREE;
 		  /* We can not determine ctor.  */
 		  if (!ctor)
@@ -6517,14 +6517,14 @@ fold_const_aggregate_ref_1 (tree t, tree
       if (ctor == error_mark_node)
 	return build_zero_cst (TREE_TYPE (t));
       /* We do not know precise address.  */
-      if (max_size == -1 || max_size != size)
+      if (!known_size_p (max_size) || may_ne (max_size, size))
 	return NULL_TREE;
       /* We can not determine ctor.  */
       if (!ctor)
 	return NULL_TREE;
 
       /* Out of bound array access.  Value is undefined, but don't fold.  */
-      if (offset < 0)
+      if (may_lt (offset, 0))
 	return NULL_TREE;
 
       return fold_ctor_reference (TREE_TYPE (t), ctor, offset, size,
Index: gcc/ipa-polymorphic-call.c
===================================================================
--- gcc/ipa-polymorphic-call.c	2017-10-23 17:07:40.909726192 +0100
+++ gcc/ipa-polymorphic-call.c	2017-10-23 17:16:59.704267816 +0100
@@ -759,7 +759,7 @@ ipa_polymorphic_call_context::set_by_inv
 						tree otr_type,
 						HOST_WIDE_INT off)
 {
-  HOST_WIDE_INT offset2, size, max_size;
+  poly_int64 offset2, size, max_size;
   bool reverse;
   tree base;
 
@@ -772,7 +772,7 @@ ipa_polymorphic_call_context::set_by_inv
 
   cst = TREE_OPERAND (cst, 0);
   base = get_ref_base_and_extent (cst, &offset2, &size, &max_size, &reverse);
-  if (!DECL_P (base) || max_size == -1 || max_size != size)
+  if (!DECL_P (base) || !known_size_p (max_size) || may_ne (max_size, size))
     return false;
 
   /* Only type inconsistent programs can have otr_type that is
@@ -899,23 +899,21 @@ ipa_polymorphic_call_context::ipa_polymo
       base_pointer = walk_ssa_copies (base_pointer, &visited);
       if (TREE_CODE (base_pointer) == ADDR_EXPR)
 	{
-	  HOST_WIDE_INT size, max_size;
-	  HOST_WIDE_INT offset2;
+	  HOST_WIDE_INT offset2, size;
 	  bool reverse;
 	  tree base
-	    = get_ref_base_and_extent (TREE_OPERAND (base_pointer, 0),
-				       &offset2, &size, &max_size, &reverse);
+	    = get_ref_base_and_extent_hwi (TREE_OPERAND (base_pointer, 0),
+					   &offset2, &size, &reverse);
+	  if (!base)
+	    break;
 
-	  if (max_size != -1 && max_size == size)
-	    combine_speculation_with (TYPE_MAIN_VARIANT (TREE_TYPE (base)),
-				      offset + offset2,
-				      true,
-				      NULL /* Do not change outer type.  */);
+	  combine_speculation_with (TYPE_MAIN_VARIANT (TREE_TYPE (base)),
+				    offset + offset2,
+				    true,
+				    NULL /* Do not change outer type.  */);
 
 	  /* If this is a varying address, punt.  */
-	  if ((TREE_CODE (base) == MEM_REF || DECL_P (base))
-	      && max_size != -1
-	      && max_size == size)
+	  if (TREE_CODE (base) == MEM_REF || DECL_P (base))
 	    {
 	      /* We found dereference of a pointer.  Type of the pointer
 		 and MEM_REF is meaningless, but we can look futher.  */
@@ -1181,7 +1179,7 @@ noncall_stmt_may_be_vtbl_ptr_store (gimp
 extr_type_from_vtbl_ptr_store (gimple *stmt, struct type_change_info *tci,
 			       HOST_WIDE_INT *type_offset)
 {
-  HOST_WIDE_INT offset, size, max_size;
+  poly_int64 offset, size, max_size;
   tree lhs, rhs, base;
   bool reverse;
 
@@ -1263,17 +1261,23 @@ extr_type_from_vtbl_ptr_store (gimple *s
 	    }
 	  return tci->offset > POINTER_SIZE ? error_mark_node : NULL_TREE;
 	}
-      if (offset != tci->offset
-	  || size != POINTER_SIZE
-	  || max_size != POINTER_SIZE)
+      if (may_ne (offset, tci->offset)
+	  || may_ne (size, POINTER_SIZE)
+	  || may_ne (max_size, POINTER_SIZE))
 	{
 	  if (dump_file)
-	    fprintf (dump_file, "    wrong offset %i!=%i or size %i\n",
-		     (int)offset, (int)tci->offset, (int)size);
-	  return offset + POINTER_SIZE <= tci->offset
-	         || (max_size != -1
-		     && tci->offset + POINTER_SIZE > offset + max_size)
-		 ? error_mark_node : NULL;
+	    {
+	      fprintf (dump_file, "    wrong offset ");
+	      print_dec (offset, dump_file);
+	      fprintf (dump_file, "!=%i or size ", (int) tci->offset);
+	      print_dec (size, dump_file);
+	      fprintf (dump_file, "\n");
+	    }
+	  return (must_le (offset + POINTER_SIZE, tci->offset)
+		  || (known_size_p (max_size)
+		      && must_gt (tci->offset + POINTER_SIZE,
+				  offset + max_size))
+		  ? error_mark_node : NULL);
 	}
     }
 
@@ -1403,7 +1407,7 @@ check_stmt_for_type_change (ao_ref *ao A
       {
 	tree op = walk_ssa_copies (gimple_call_arg (stmt, 0));
 	tree type = TYPE_METHOD_BASETYPE (TREE_TYPE (fn));
-	HOST_WIDE_INT offset = 0, size, max_size;
+	HOST_WIDE_INT offset = 0;
 	bool reverse;
 
 	if (dump_file)
@@ -1415,14 +1419,15 @@ check_stmt_for_type_change (ao_ref *ao A
 	/* See if THIS parameter seems like instance pointer.  */
 	if (TREE_CODE (op) == ADDR_EXPR)
 	  {
-	    op = get_ref_base_and_extent (TREE_OPERAND (op, 0), &offset,
-					  &size, &max_size, &reverse);
-	    if (size != max_size || max_size == -1)
+	    HOST_WIDE_INT size;
+	    op = get_ref_base_and_extent_hwi (TREE_OPERAND (op, 0),
+					      &offset, &size, &reverse);
+	    if (!op)
 	      {
                 tci->speculative++;
 	        return csftc_abort_walking_p (tci->speculative);
 	      }
-	    if (op && TREE_CODE (op) == MEM_REF)
+	    if (TREE_CODE (op) == MEM_REF)
 	      {
 		if (!tree_fits_shwi_p (TREE_OPERAND (op, 1)))
 		  {
@@ -1578,7 +1583,6 @@ ipa_polymorphic_call_context::get_dynami
   if (gimple_code (call) == GIMPLE_CALL)
     {
       tree ref = gimple_call_fn (call);
-      HOST_WIDE_INT offset2, size, max_size;
       bool reverse;
 
       if (TREE_CODE (ref) == OBJ_TYPE_REF)
@@ -1608,10 +1612,11 @@ ipa_polymorphic_call_context::get_dynami
 		  && !SSA_NAME_IS_DEFAULT_DEF (ref)
 		  && gimple_assign_load_p (SSA_NAME_DEF_STMT (ref)))
 		{
+		  HOST_WIDE_INT offset2, size;
 		  tree ref_exp = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (ref));
 		  tree base_ref
-		    = get_ref_base_and_extent (ref_exp, &offset2, &size,
-					       &max_size, &reverse);
+		    = get_ref_base_and_extent_hwi (ref_exp, &offset2,
+						   &size, &reverse);
 
 		  /* Finally verify that what we found looks like read from
 		     OTR_OBJECT or from INSTANCE with offset OFFSET.  */
Index: gcc/ipa-prop.c
===================================================================
--- gcc/ipa-prop.c	2017-10-23 17:16:58.507429441 +0100
+++ gcc/ipa-prop.c	2017-10-23 17:16:59.704267816 +0100
@@ -1071,12 +1071,11 @@ ipa_load_from_parm_agg (struct ipa_func_
 			bool *by_ref_p, bool *guaranteed_unmodified)
 {
   int index;
-  HOST_WIDE_INT size, max_size;
+  HOST_WIDE_INT size;
   bool reverse;
-  tree base
-    = get_ref_base_and_extent (op, offset_p, &size, &max_size, &reverse);
+  tree base = get_ref_base_and_extent_hwi (op, offset_p, &size, &reverse);
 
-  if (max_size == -1 || max_size != size || *offset_p < 0)
+  if (!base)
     return false;
 
   if (DECL_P (base))
@@ -1204,7 +1203,7 @@ compute_complex_assign_jump_func (struct
 				  gcall *call, gimple *stmt, tree name,
 				  tree param_type)
 {
-  HOST_WIDE_INT offset, size, max_size;
+  HOST_WIDE_INT offset, size;
   tree op1, tc_ssa, base, ssa;
   bool reverse;
   int index;
@@ -1267,11 +1266,8 @@ compute_complex_assign_jump_func (struct
   op1 = TREE_OPERAND (op1, 0);
   if (TREE_CODE (TREE_TYPE (op1)) != RECORD_TYPE)
     return;
-  base = get_ref_base_and_extent (op1, &offset, &size, &max_size, &reverse);
-  if (TREE_CODE (base) != MEM_REF
-      /* If this is a varying address, punt.  */
-      || max_size == -1
-      || max_size != size)
+  base = get_ref_base_and_extent_hwi (op1, &offset, &size, &reverse);
+  if (!base || TREE_CODE (base) != MEM_REF)
     return;
   offset += mem_ref_offset (base).to_short_addr () * BITS_PER_UNIT;
   ssa = TREE_OPERAND (base, 0);
@@ -1301,7 +1297,7 @@ compute_complex_assign_jump_func (struct
 static tree
 get_ancestor_addr_info (gimple *assign, tree *obj_p, HOST_WIDE_INT *offset)
 {
-  HOST_WIDE_INT size, max_size;
+  HOST_WIDE_INT size;
   tree expr, parm, obj;
   bool reverse;
 
@@ -1313,13 +1309,9 @@ get_ancestor_addr_info (gimple *assign,
     return NULL_TREE;
   expr = TREE_OPERAND (expr, 0);
   obj = expr;
-  expr = get_ref_base_and_extent (expr, offset, &size, &max_size, &reverse);
+  expr = get_ref_base_and_extent_hwi (expr, offset, &size, &reverse);
 
-  if (TREE_CODE (expr) != MEM_REF
-      /* If this is a varying address, punt.  */
-      || max_size == -1
-      || max_size != size
-      || *offset < 0)
+  if (!expr || TREE_CODE (expr) != MEM_REF)
     return NULL_TREE;
   parm = TREE_OPERAND (expr, 0);
   if (TREE_CODE (parm) != SSA_NAME
@@ -1581,15 +1573,12 @@ determine_locally_known_aggregate_parts
 	}
       else if (TREE_CODE (arg) == ADDR_EXPR)
 	{
-	  HOST_WIDE_INT arg_max_size;
 	  bool reverse;
 
 	  arg = TREE_OPERAND (arg, 0);
-	  arg_base = get_ref_base_and_extent (arg, &arg_offset, &arg_size,
-					      &arg_max_size, &reverse);
-	  if (arg_max_size == -1
-	      || arg_max_size != arg_size
-	      || arg_offset < 0)
+	  arg_base = get_ref_base_and_extent_hwi (arg, &arg_offset,
+						  &arg_size, &reverse);
+	  if (!arg_base)
 	    return;
 	  if (DECL_P (arg_base))
 	    {
@@ -1604,18 +1593,15 @@ determine_locally_known_aggregate_parts
     }
   else
     {
-      HOST_WIDE_INT arg_max_size;
       bool reverse;
 
       gcc_checking_assert (AGGREGATE_TYPE_P (TREE_TYPE (arg)));
 
       by_ref = false;
       check_ref = false;
-      arg_base = get_ref_base_and_extent (arg, &arg_offset, &arg_size,
-					  &arg_max_size, &reverse);
-      if (arg_max_size == -1
-	  || arg_max_size != arg_size
-	  || arg_offset < 0)
+      arg_base = get_ref_base_and_extent_hwi (arg, &arg_offset,
+					      &arg_size, &reverse);
+      if (!arg_base)
 	return;
 
       ao_ref_init (&r, arg);
@@ -1631,7 +1617,7 @@ determine_locally_known_aggregate_parts
     {
       struct ipa_known_agg_contents_list *n, **p;
       gimple *stmt = gsi_stmt (gsi);
-      HOST_WIDE_INT lhs_offset, lhs_size, lhs_max_size;
+      HOST_WIDE_INT lhs_offset, lhs_size;
       tree lhs, rhs, lhs_base;
       bool reverse;
 
@@ -1647,10 +1633,9 @@ determine_locally_known_aggregate_parts
 	  || contains_bitfld_component_ref_p (lhs))
 	break;
 
-      lhs_base = get_ref_base_and_extent (lhs, &lhs_offset, &lhs_size,
-					  &lhs_max_size, &reverse);
-      if (lhs_max_size == -1
-	  || lhs_max_size != lhs_size)
+      lhs_base = get_ref_base_and_extent_hwi (lhs, &lhs_offset,
+					      &lhs_size, &reverse);
+      if (!lhs_base)
 	break;
 
       if (check_ref)
@@ -4574,11 +4559,11 @@ ipa_get_adjustment_candidate (tree **exp
 	*convert = true;
     }
 
-  HOST_WIDE_INT offset, size, max_size;
+  poly_int64 offset, size, max_size;
   bool reverse;
   tree base
     = get_ref_base_and_extent (**expr, &offset, &size, &max_size, &reverse);
-  if (!base || size == -1 || max_size == -1)
+  if (!base || !known_size_p (size) || !known_size_p (max_size))
     return NULL;
 
   if (TREE_CODE (base) == MEM_REF)
Index: gcc/tree-sra.c
===================================================================
--- gcc/tree-sra.c	2017-10-23 17:07:40.909726192 +0100
+++ gcc/tree-sra.c	2017-10-23 17:16:59.705267681 +0100
@@ -865,11 +865,20 @@ static bool maybe_add_sra_candidate (tre
 create_access (tree expr, gimple *stmt, bool write)
 {
   struct access *access;
+  poly_int64 poffset, psize, pmax_size;
   HOST_WIDE_INT offset, size, max_size;
   tree base = expr;
   bool reverse, ptr, unscalarizable_region = false;
 
-  base = get_ref_base_and_extent (expr, &offset, &size, &max_size, &reverse);
+  base = get_ref_base_and_extent (expr, &poffset, &psize, &pmax_size,
+				  &reverse);
+  if (!poffset.is_constant (&offset)
+      || !psize.is_constant (&size)
+      || !pmax_size.is_constant (&max_size))
+    {
+      disqualify_candidate (base, "Encountered a polynomial-sized access.");
+      return NULL;
+    }
 
   if (sra_mode == SRA_MODE_EARLY_IPA
       && TREE_CODE (base) == MEM_REF)
@@ -3048,7 +3057,8 @@ clobber_subtree (struct access *access,
 static struct access *
 get_access_for_expr (tree expr)
 {
-  HOST_WIDE_INT offset, size, max_size;
+  poly_int64 poffset, psize, pmax_size;
+  HOST_WIDE_INT offset, max_size;
   tree base;
   bool reverse;
 
@@ -3058,8 +3068,12 @@ get_access_for_expr (tree expr)
   if (TREE_CODE (expr) == VIEW_CONVERT_EXPR)
     expr = TREE_OPERAND (expr, 0);
 
-  base = get_ref_base_and_extent (expr, &offset, &size, &max_size, &reverse);
-  if (max_size == -1 || !DECL_P (base))
+  base = get_ref_base_and_extent (expr, &poffset, &psize, &pmax_size,
+				  &reverse);
+  if (!known_size_p (pmax_size)
+      || !pmax_size.is_constant (&max_size)
+      || !poffset.is_constant (&offset)
+      || !DECL_P (base))
     return NULL;
 
   if (!bitmap_bit_p (candidate_bitmap, DECL_UID (base)))
Index: gcc/tree-ssa-alias.c
===================================================================
--- gcc/tree-ssa-alias.c	2017-10-23 17:11:40.347140508 +0100
+++ gcc/tree-ssa-alias.c	2017-10-23 17:16:59.705267681 +0100
@@ -635,7 +635,7 @@ ao_ref_init (ao_ref *r, tree ref)
 ao_ref_base (ao_ref *ref)
 {
   bool reverse;
-  HOST_WIDE_INT offset, size, max_size;
+  poly_int64 offset, size, max_size;
 
   if (ref->base)
     return ref->base;
@@ -823,7 +823,7 @@ aliasing_component_refs_p (tree ref1,
     return true;
   else if (same_p == 1)
     {
-      HOST_WIDE_INT offadj, sztmp, msztmp;
+      poly_int64 offadj, sztmp, msztmp;
       bool reverse;
       get_ref_base_and_extent (*refp, &offadj, &sztmp, &msztmp, &reverse);
       offset2 -= offadj;
@@ -842,7 +842,7 @@ aliasing_component_refs_p (tree ref1,
     return true;
   else if (same_p == 1)
     {
-      HOST_WIDE_INT offadj, sztmp, msztmp;
+      poly_int64 offadj, sztmp, msztmp;
       bool reverse;
       get_ref_base_and_extent (*refp, &offadj, &sztmp, &msztmp, &reverse);
       offset1 -= offadj;
@@ -2450,15 +2450,12 @@ stmt_kills_ref_p (gimple *stmt, ao_ref *
 	 the access properly.  */
       if (!ref->max_size_known_p ())
 	return false;
-      HOST_WIDE_INT size, max_size, const_offset;
-      poly_int64 ref_offset = ref->offset;
+      poly_int64 size, offset, max_size, ref_offset = ref->offset;
       bool reverse;
-      tree base
-	= get_ref_base_and_extent (lhs, &const_offset, &size, &max_size,
-				   &reverse);
+      tree base = get_ref_base_and_extent (lhs, &offset, &size, &max_size,
+					   &reverse);
       /* We can get MEM[symbol: sZ, index: D.8862_1] here,
 	 so base == ref->base does not always hold.  */
-      poly_int64 offset = const_offset;
       if (base != ref->base)
 	{
 	  /* Try using points-to info.  */
@@ -2490,7 +2487,7 @@ stmt_kills_ref_p (gimple *stmt, ao_ref *
 	}
       /* For a must-alias check we need to be able to constrain
 	 the access properly.  */
-      if (size == max_size
+      if (must_eq (size, max_size)
 	  && known_subrange_p (ref_offset, ref->max_size, offset, size))
 	return true;
     }
Index: gcc/tree-ssa-dce.c
===================================================================
--- gcc/tree-ssa-dce.c	2017-10-23 17:11:40.348142423 +0100
+++ gcc/tree-ssa-dce.c	2017-10-23 17:16:59.706267546 +0100
@@ -477,7 +477,7 @@ mark_aliased_reaching_defs_necessary_1 (
       && !stmt_can_throw_internal (def_stmt))
     {
       tree base, lhs = gimple_get_lhs (def_stmt);
-      HOST_WIDE_INT size, offset, max_size;
+      poly_int64 size, offset, max_size;
       bool reverse;
       ao_ref_base (ref);
       base
@@ -488,7 +488,7 @@ mark_aliased_reaching_defs_necessary_1 (
 	{
 	  /* For a must-alias check we need to be able to constrain
 	     the accesses properly.  */
-	  if (size == max_size
+	  if (must_eq (size, max_size)
 	      && known_subrange_p (ref->offset, ref->max_size, offset, size))
 	    return true;
 	  /* Or they need to be exactly the same.  */
Index: gcc/tree-ssa-scopedtables.c
===================================================================
--- gcc/tree-ssa-scopedtables.c	2017-10-23 17:07:40.909726192 +0100
+++ gcc/tree-ssa-scopedtables.c	2017-10-23 17:16:59.706267546 +0100
@@ -480,13 +480,13 @@ avail_expr_hash (class expr_hash_elt *p)
 	     Dealing with both MEM_REF and ARRAY_REF allows us not to care
 	     about equivalence with other statements not considered here.  */
 	  bool reverse;
-	  HOST_WIDE_INT offset, size, max_size;
+	  poly_int64 offset, size, max_size;
 	  tree base = get_ref_base_and_extent (t, &offset, &size, &max_size,
 					       &reverse);
 	  /* Strictly, we could try to normalize variable-sized accesses too,
 	    but here we just deal with the common case.  */
-	  if (size != -1
-	      && size == max_size)
+	  if (known_size_p (max_size)
+	      && must_eq (size, max_size))
 	    {
 	      enum tree_code code = MEM_REF;
 	      hstate.add_object (code);
@@ -520,26 +520,26 @@ equal_mem_array_ref_p (tree t0, tree t1)
   if (!types_compatible_p (TREE_TYPE (t0), TREE_TYPE (t1)))
     return false;
   bool rev0;
-  HOST_WIDE_INT off0, sz0, max0;
+  poly_int64 off0, sz0, max0;
   tree base0 = get_ref_base_and_extent (t0, &off0, &sz0, &max0, &rev0);
-  if (sz0 == -1
-      || sz0 != max0)
+  if (!known_size_p (max0)
+      || may_ne (sz0, max0))
     return false;
 
   bool rev1;
-  HOST_WIDE_INT off1, sz1, max1;
+  poly_int64 off1, sz1, max1;
   tree base1 = get_ref_base_and_extent (t1, &off1, &sz1, &max1, &rev1);
-  if (sz1 == -1
-      || sz1 != max1)
+  if (!known_size_p (max1)
+      || may_ne (sz1, max1))
     return false;
 
   if (rev0 != rev1)
     return false;
 
   /* Types were compatible, so this is a sanity check.  */
-  gcc_assert (sz0 == sz1);
+  gcc_assert (must_eq (sz0, sz1));
 
-  return (off0 == off1) && operand_equal_p (base0, base1, 0);
+  return must_eq (off0, off1) && operand_equal_p (base0, base1, 0);
 }
 
 /* Compare two hashable_expr structures for equivalence.  They are
Index: gcc/tree-ssa-sccvn.c
===================================================================
--- gcc/tree-ssa-sccvn.c	2017-10-23 17:11:40.349144338 +0100
+++ gcc/tree-ssa-sccvn.c	2017-10-23 17:16:59.706267546 +0100
@@ -1920,7 +1920,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree
     {
       tree ref2 = TREE_OPERAND (gimple_call_arg (def_stmt, 0), 0);
       tree base2;
-      HOST_WIDE_INT offset2, size2, maxsize2;
+      poly_int64 offset2, size2, maxsize2;
       bool reverse;
       base2 = get_ref_base_and_extent (ref2, &offset2, &size2, &maxsize2,
 				       &reverse);
@@ -1943,7 +1943,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree
 	   && CONSTRUCTOR_NELTS (gimple_assign_rhs1 (def_stmt)) == 0)
     {
       tree base2;
-      HOST_WIDE_INT offset2, size2, maxsize2;
+      poly_int64 offset2, size2, maxsize2;
       bool reverse;
       base2 = get_ref_base_and_extent (gimple_assign_lhs (def_stmt),
 				       &offset2, &size2, &maxsize2, &reverse);
@@ -1975,13 +1975,12 @@ vn_reference_lookup_3 (ao_ref *ref, tree
 		   && is_gimple_min_invariant (SSA_VAL (gimple_assign_rhs1 (def_stmt))))))
     {
       tree base2;
-      HOST_WIDE_INT offset2, size2, maxsize2;
+      HOST_WIDE_INT offset2, size2;
       bool reverse;
-      base2 = get_ref_base_and_extent (gimple_assign_lhs (def_stmt),
-				       &offset2, &size2, &maxsize2, &reverse);
-      if (!reverse
-	  && maxsize2 != -1
-	  && maxsize2 == size2
+      base2 = get_ref_base_and_extent_hwi (gimple_assign_lhs (def_stmt),
+					   &offset2, &size2, &reverse);
+      if (base2
+	  && !reverse
 	  && size2 % BITS_PER_UNIT == 0
 	  && offset2 % BITS_PER_UNIT == 0
 	  && operand_equal_p (base, base2, 0)
@@ -2039,14 +2038,14 @@ vn_reference_lookup_3 (ao_ref *ref, tree
 	   && TREE_CODE (gimple_assign_rhs1 (def_stmt)) == SSA_NAME)
     {
       tree base2;
-      HOST_WIDE_INT offset2, size2, maxsize2;
+      poly_int64 offset2, size2, maxsize2;
       bool reverse;
       base2 = get_ref_base_and_extent (gimple_assign_lhs (def_stmt),
 				       &offset2, &size2, &maxsize2,
 				       &reverse);
       if (!reverse
-	  && maxsize2 != -1
-	  && maxsize2 == size2
+	  && known_size_p (maxsize2)
+	  && must_eq (maxsize2, size2)
 	  && operand_equal_p (base, base2, 0)
 	  && known_subrange_p (offset, maxsize, offset2, size2)
 	  /* ???  We can't handle bitfield precision extracts without
Index: gcc/tree-ssa-structalias.c
===================================================================
--- gcc/tree-ssa-structalias.c	2017-10-23 17:07:40.909726192 +0100
+++ gcc/tree-ssa-structalias.c	2017-10-23 17:16:59.707267411 +0100
@@ -3191,9 +3191,9 @@ get_constraint_for_component_ref (tree t
 				  bool address_p, bool lhs_p)
 {
   tree orig_t = t;
-  HOST_WIDE_INT bitsize = -1;
-  HOST_WIDE_INT bitmaxsize = -1;
-  HOST_WIDE_INT bitpos;
+  poly_int64 bitsize = -1;
+  poly_int64 bitmaxsize = -1;
+  poly_int64 bitpos;
   bool reverse;
   tree forzero;
 
@@ -3255,8 +3255,8 @@ get_constraint_for_component_ref (tree t
 	 ignore this constraint. When we handle pointer subtraction,
 	 we may have to do something cute here.  */
 
-      if ((unsigned HOST_WIDE_INT)bitpos < get_varinfo (result.var)->fullsize
-	  && bitmaxsize != 0)
+      if (may_lt (poly_uint64 (bitpos), get_varinfo (result.var)->fullsize)
+	  && maybe_nonzero (bitmaxsize))
 	{
 	  /* It's also not true that the constraint will actually start at the
 	     right offset, it may start in some padding.  We only care about
@@ -3268,8 +3268,8 @@ get_constraint_for_component_ref (tree t
 	  cexpr.offset = 0;
 	  for (curr = get_varinfo (cexpr.var); curr; curr = vi_next (curr))
 	    {
-	      if (ranges_overlap_p (curr->offset, curr->size,
-				    bitpos, bitmaxsize))
+	      if (ranges_may_overlap_p (poly_int64 (curr->offset), curr->size,
+					bitpos, bitmaxsize))
 		{
 		  cexpr.var = curr->id;
 		  results->safe_push (cexpr);
@@ -3302,7 +3302,7 @@ get_constraint_for_component_ref (tree t
 	      results->safe_push (cexpr);
 	    }
 	}
-      else if (bitmaxsize == 0)
+      else if (known_zero (bitmaxsize))
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    fprintf (dump_file, "Access to zero-sized part of variable, "
@@ -3317,13 +3317,15 @@ get_constraint_for_component_ref (tree t
       /* If we do not know exactly where the access goes say so.  Note
 	 that only for non-structure accesses we know that we access
 	 at most one subfiled of any variable.  */
-      if (bitpos == -1
-	  || bitsize != bitmaxsize
+      HOST_WIDE_INT const_bitpos;
+      if (!bitpos.is_constant (&const_bitpos)
+	  || const_bitpos == -1
+	  || may_ne (bitsize, bitmaxsize)
 	  || AGGREGATE_TYPE_P (TREE_TYPE (orig_t))
 	  || result.offset == UNKNOWN_OFFSET)
 	result.offset = UNKNOWN_OFFSET;
       else
-	result.offset += bitpos;
+	result.offset += const_bitpos;
     }
   else if (result.type == ADDRESSOF)
     {
@@ -3660,14 +3662,17 @@ do_structure_copy (tree lhsop, tree rhso
 	   && (rhsp->type == SCALAR
 	       || rhsp->type == ADDRESSOF))
     {
-      HOST_WIDE_INT lhssize, lhsmaxsize, lhsoffset;
-      HOST_WIDE_INT rhssize, rhsmaxsize, rhsoffset;
+      HOST_WIDE_INT lhssize, lhsoffset;
+      HOST_WIDE_INT rhssize, rhsoffset;
       bool reverse;
       unsigned k = 0;
-      get_ref_base_and_extent (lhsop, &lhsoffset, &lhssize, &lhsmaxsize,
-			       &reverse);
-      get_ref_base_and_extent (rhsop, &rhsoffset, &rhssize, &rhsmaxsize,
-			       &reverse);
+      if (!get_ref_base_and_extent_hwi (lhsop, &lhsoffset, &lhssize, &reverse)
+	  || !get_ref_base_and_extent_hwi (rhsop, &rhsoffset, &rhssize,
+					   &reverse))
+	{
+	  process_all_all_constraints (lhsc, rhsc);
+	  return;
+	}
       for (j = 0; lhsc.iterate (j, &lhsp);)
 	{
 	  varinfo_t lhsv, rhsv;
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c	2017-10-23 17:16:50.377527331 +0100
+++ gcc/var-tracking.c	2017-10-23 17:16:59.708267276 +0100
@@ -5208,20 +5208,20 @@ track_expr_p (tree expr, bool need_rtl)
 	      || (TREE_CODE (realdecl) == MEM_REF
 		  && TREE_CODE (TREE_OPERAND (realdecl, 0)) == ADDR_EXPR))
 	    {
-	      HOST_WIDE_INT bitsize, bitpos, maxsize;
+	      HOST_WIDE_INT bitsize, bitpos;
 	      bool reverse;
 	      tree innerdecl
-		= get_ref_base_and_extent (realdecl, &bitpos, &bitsize,
-					   &maxsize, &reverse);
-	      if (!DECL_P (innerdecl)
+		= get_ref_base_and_extent_hwi (realdecl, &bitpos,
+					       &bitsize, &reverse);
+	      if (!innerdecl
+		  || !DECL_P (innerdecl)
 		  || DECL_IGNORED_P (innerdecl)
 		  /* Do not track declarations for parts of tracked record
 		     parameters since we want to track them as a whole.  */
 		  || tracked_record_parameter_p (innerdecl)
 		  || TREE_STATIC (innerdecl)
-		  || bitsize <= 0
-		  || bitpos + bitsize > 256
-		  || bitsize != maxsize)
+		  || bitsize == 0
+		  || bitpos + bitsize > 256)
 		return 0;
 	      else
 		realdecl = expr;
