public inbox for gcc-patches@gcc.gnu.org
* [00/46] Remove vinfo_for_stmt etc.
@ 2018-07-24  9:52 Richard Sandiford
From: Richard Sandiford @ 2018-07-24  9:52 UTC
  To: gcc-patches

The aim of this series is to:

(a) make the vectoriser refer to statements using its own expanded
    stmt_vec_info rather than the underlying gimple stmt.  This reduces
    the number of stmt lookups from 480 in current sources to under 100.

(b) make the remaining lookups relative to the owning vec_info rather than
    to global state.

The original motivation was to make it more natural to have multiple
vec_infos live at once.
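
As a rough illustration of the shape of the change, rather than a
quote from any individual patch: a site that currently does

    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);

ends up doing

    stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);

so that the lookup is relative to one particular vec_info rather than
to the global array.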

The series is a clean-up only in a data structure sense.  It certainly
doesn't make the code prettier, and in the end it only shaves 120 LOC
in total.  But I think it should make it easier to do follow-on clean-ups.

The series was pretty tedious to write and will be pretty tedious
to review, sorry.

I tested each individual patch on aarch64-linux-gnu and the series as a
whole on aarch64-linux-gnu with SVE, aarch64_be-elf and x86_64-linux-gnu.
I also built and tested at least one target per CPU directory, made sure
that there were no new warnings, and checked for differences in assembly
output for gcc.dg, g++.dg and gcc.c-torture.  There were a couple of
cases in vect-alias-check-* where equality comparisons used the
opposite operand order, which is an unrelated problem.  There were no
other differences.

OK to install?

Thanks,
Richard


* [01/46] Move special cases out of get_initial_def_for_reduction
From: Richard Sandiford @ 2018-07-24  9:52 UTC
  To: gcc-patches

This minor clean-up avoids repeating the test for double reductions
and also moves the vect_get_vec_def_for_operand call to the same
function as the corresponding vect_get_vec_def_for_stmt_copy.


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (get_initial_def_for_reduction): Move special
	cases for nested loops from here to ...
	(vect_create_epilog_for_reduction): ...here.  Only call
	vect_is_simple_use for inner-loop reductions.

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-13 10:11:14.429843575 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:02.965552667 +0100
@@ -4113,10 +4113,8 @@ get_initial_def_for_reduction (gimple *s
   enum tree_code code = gimple_assign_rhs_code (stmt);
   tree def_for_init;
   tree init_def;
-  bool nested_in_vect_loop = false;
   REAL_VALUE_TYPE real_init_val = dconst0;
   int int_init_val = 0;
-  gimple *def_stmt = NULL;
   gimple_seq stmts = NULL;
 
   gcc_assert (vectype);
@@ -4124,39 +4122,12 @@ get_initial_def_for_reduction (gimple *s
   gcc_assert (POINTER_TYPE_P (scalar_type) || INTEGRAL_TYPE_P (scalar_type)
 	      || SCALAR_FLOAT_TYPE_P (scalar_type));
 
-  if (nested_in_vect_loop_p (loop, stmt))
-    nested_in_vect_loop = true;
-  else
-    gcc_assert (loop == (gimple_bb (stmt))->loop_father);
-
-  /* In case of double reduction we only create a vector variable to be put
-     in the reduction phi node.  The actual statement creation is done in
-     vect_create_epilog_for_reduction.  */
-  if (adjustment_def && nested_in_vect_loop
-      && TREE_CODE (init_val) == SSA_NAME
-      && (def_stmt = SSA_NAME_DEF_STMT (init_val))
-      && gimple_code (def_stmt) == GIMPLE_PHI
-      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
-      && vinfo_for_stmt (def_stmt)
-      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
-          == vect_double_reduction_def)
-    {
-      *adjustment_def = NULL;
-      return vect_create_destination_var (init_val, vectype);
-    }
+  gcc_assert (nested_in_vect_loop_p (loop, stmt)
+	      || loop == (gimple_bb (stmt))->loop_father);
 
   vect_reduction_type reduction_type
     = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_vinfo);
 
-  /* In case of a nested reduction do not use an adjustment def as
-     that case is not supported by the epilogue generation correctly
-     if ncopies is not one.  */
-  if (adjustment_def && nested_in_vect_loop)
-    {
-      *adjustment_def = NULL;
-      return vect_get_vec_def_for_operand (init_val, stmt);
-    }
-
   switch (code)
     {
     case WIDEN_SUM_EXPR:
@@ -4586,9 +4557,22 @@ vect_create_epilog_for_reduction (vec<tr
 	      || (induc_code == MIN_EXPR
 		  && tree_int_cst_lt (induc_val, initial_def))))
 	induc_val = initial_def;
-      vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
-      vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
-						       &adjustment_def);
+
+      if (double_reduc)
+	/* In case of double reduction we only create a vector variable
+	   to be put in the reduction phi node.  The actual statement
+	   creation is done later in this function.  */
+	vec_initial_def = vect_create_destination_var (initial_def, vectype);
+      else if (nested_in_vect_loop)
+	{
+	  /* Do not use an adjustment def as that case is not supported
+	     correctly if ncopies is not one.  */
+	  vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
+	  vec_initial_def = vect_get_vec_def_for_operand (initial_def, stmt);
+	}
+      else
+	vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
+							 &adjustment_def);
       vec_initial_defs.create (1);
       vec_initial_defs.quick_push (vec_initial_def);
     }


* [03/46] Remove unnecessary update of NUM_SLP_USES
From: Richard Sandiford @ 2018-07-24  9:53 UTC
  To: gcc-patches

vect_free_slp_tree had:

  gimple *stmt;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
    /* After transform some stmts are removed and thus their vinfo is gone.  */
    if (vinfo_for_stmt (stmt))
      {
	gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
	STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
      }

But after transform this update is redundant even for statements that do
exist, so it seems better to skip this loop for the final teardown.
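
To illustrate the intended contract of the new parameter (this is a
summary of the call sites below, not code from the patch):

    /* Analysis failure: the scalar statements might still be used by
       other SLP instances, so keep NUM_SLP_USES accurate.  */
    vect_free_slp_instance (instance, false);

    /* Final teardown, after the transform or after a final decision
       not to vectorize: the counts no longer matter.  */
    vect_free_slp_instance (instance, true);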


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_free_slp_instance): Add a final_p parameter.
	* tree-vect-slp.c (vect_free_slp_tree): Likewise.  Don't update
	STMT_VINFO_NUM_SLP_USES when it's true.
	(vect_free_slp_instance): Add a final_p parameter and pass it to
	vect_free_slp_tree.
	(vect_build_slp_tree_2): Update call to vect_free_slp_instance.
	(vect_analyze_slp_instance): Likewise.
	(vect_slp_analyze_operations): Likewise.
	(vect_slp_analyze_bb_1): Likewise.
	* tree-vectorizer.c (vec_info::~vec_info): Likewise.
	* tree-vect-loop.c (vect_transform_loop): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-03 10:59:30.480481417 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:09.237496975 +0100
@@ -1634,7 +1634,7 @@ extern int vect_get_known_peeling_cost (
 extern tree cse_and_gimplify_to_preheader (loop_vec_info, tree);
 
 /* In tree-vect-slp.c.  */
-extern void vect_free_slp_instance (slp_instance);
+extern void vect_free_slp_instance (slp_instance, bool);
 extern bool vect_transform_slp_perm_load (slp_tree, vec<tree> ,
 					  gimple_stmt_iterator *, poly_uint64,
 					  slp_instance, bool, unsigned *);
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-23 16:58:06.000000000 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:09.237496975 +0100
@@ -47,25 +47,32 @@ Software Foundation; either version 3, o
 #include "internal-fn.h"
 
 
-/* Recursively free the memory allocated for the SLP tree rooted at NODE.  */
+/* Recursively free the memory allocated for the SLP tree rooted at NODE.
+   FINAL_P is true if we have vectorized the instance or if we have
+   made a final decision not to vectorize the statements in any way.  */
 
 static void
-vect_free_slp_tree (slp_tree node)
+vect_free_slp_tree (slp_tree node, bool final_p)
 {
   int i;
   slp_tree child;
 
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
-    vect_free_slp_tree (child);
+    vect_free_slp_tree (child, final_p);
 
-  gimple *stmt;
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
-    /* After transform some stmts are removed and thus their vinfo is gone.  */
-    if (vinfo_for_stmt (stmt))
-      {
-	gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
-	STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
-      }
+  /* Don't update STMT_VINFO_NUM_SLP_USES if it isn't relevant.
+     Some statements might no longer exist, after having been
+     removed by vect_transform_stmt.  Updating the remaining
+     statements would be redundant.  */
+  if (!final_p)
+    {
+      gimple *stmt;
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+	{
+	  gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
+	  STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
+	}
+    }
 
   SLP_TREE_CHILDREN (node).release ();
   SLP_TREE_SCALAR_STMTS (node).release ();
@@ -76,12 +83,14 @@ vect_free_slp_tree (slp_tree node)
 }
 
 
-/* Free the memory allocated for the SLP instance.  */
+/* Free the memory allocated for the SLP instance.  FINAL_P is true if we
+   have vectorized the instance or if we have made a final decision not
+   to vectorize the statements in any way.  */
 
 void
-vect_free_slp_instance (slp_instance instance)
+vect_free_slp_instance (slp_instance instance, bool final_p)
 {
-  vect_free_slp_tree (SLP_INSTANCE_TREE (instance));
+  vect_free_slp_tree (SLP_INSTANCE_TREE (instance), final_p);
   SLP_INSTANCE_LOADS (instance).release ();
   free (instance);
 }
@@ -1284,7 +1293,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
       if (++this_tree_size > max_tree_size)
 	{
 	  FOR_EACH_VEC_ELT (children, j, child)
-	    vect_free_slp_tree (child);
+	    vect_free_slp_tree (child, false);
 	  vect_free_oprnd_info (oprnds_info);
 	  return NULL;
 	}
@@ -1315,7 +1324,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		  this_loads.truncate (old_nloads);
 		  this_tree_size = old_tree_size;
 		  FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (child), j, grandchild)
-		    vect_free_slp_tree (grandchild);
+		    vect_free_slp_tree (grandchild, false);
 		  SLP_TREE_CHILDREN (child).truncate (0);
 
 		  dump_printf_loc (MSG_NOTE, vect_location,
@@ -1495,7 +1504,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		      this_loads.truncate (old_nloads);
 		      this_tree_size = old_tree_size;
 		      FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (child), j, grandchild)
-			vect_free_slp_tree (grandchild);
+			vect_free_slp_tree (grandchild, false);
 		      SLP_TREE_CHILDREN (child).truncate (0);
 
 		      dump_printf_loc (MSG_NOTE, vect_location,
@@ -1519,7 +1528,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 fail:
       gcc_assert (child == NULL);
       FOR_EACH_VEC_ELT (children, j, child)
-	vect_free_slp_tree (child);
+	vect_free_slp_tree (child, false);
       vect_free_oprnd_info (oprnds_info);
       return NULL;
     }
@@ -2036,13 +2045,13 @@ vect_analyze_slp_instance (vec_info *vin
 				 "Build SLP failed: store group "
 				 "size not a multiple of the vector size "
 				 "in basic block SLP\n");
-	      vect_free_slp_tree (node);
+	      vect_free_slp_tree (node, false);
 	      loads.release ();
 	      return false;
 	    }
 	  /* Fatal mismatch.  */
 	  matches[group_size / const_max_nunits * const_max_nunits] = false;
-	  vect_free_slp_tree (node);
+	  vect_free_slp_tree (node, false);
 	  loads.release ();
 	}
       else
@@ -2102,7 +2111,7 @@ vect_analyze_slp_instance (vec_info *vin
 		      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
 					TDF_SLIM, stmt, 0);
                 }
-              vect_free_slp_instance (new_instance);
+	      vect_free_slp_instance (new_instance, false);
               return false;
             }
         }
@@ -2133,7 +2142,7 @@ vect_analyze_slp_instance (vec_info *vin
 		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				 "Built SLP cancelled: can use "
 				 "load/store-lanes\n");
-	      vect_free_slp_instance (new_instance);
+	      vect_free_slp_instance (new_instance, false);
 	      return false;
 	    }
 	}
@@ -2668,7 +2677,7 @@ vect_slp_analyze_operations (vec_info *v
 	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
 			    SLP_TREE_SCALAR_STMTS
 			      (SLP_INSTANCE_TREE (instance))[0], 0);
-	  vect_free_slp_instance (instance);
+	  vect_free_slp_instance (instance, false);
           vinfo->slp_instances.ordered_remove (i);
 	  cost_vec.release ();
 	}
@@ -2947,7 +2956,7 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
 	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
 			    SLP_TREE_SCALAR_STMTS
 			      (SLP_INSTANCE_TREE (instance))[0], 0);
-	  vect_free_slp_instance (instance);
+	  vect_free_slp_instance (instance, false);
 	  BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
 	  continue;
 	}
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-06-27 10:27:09.894649672 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:22:09.237496975 +0100
@@ -466,7 +466,7 @@ vec_info::~vec_info ()
   unsigned int i;
 
   FOR_EACH_VEC_ELT (slp_instances, i, instance)
-    vect_free_slp_instance (instance);
+    vect_free_slp_instance (instance, true);
 
   destroy_cost_data (target_cost_data);
   free_stmt_vec_infos (&stmt_vec_infos);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:06.269523330 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:09.237496975 +0100
@@ -2229,7 +2229,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
   LOOP_VINFO_VECT_FACTOR (loop_vinfo) = saved_vectorization_factor;
   /* Free the SLP instances.  */
   FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), j, instance)
-    vect_free_slp_instance (instance);
+    vect_free_slp_instance (instance, false);
   LOOP_VINFO_SLP_INSTANCES (loop_vinfo).release ();
   /* Reset SLP type to loop_vect on all stmts.  */
   for (i = 0; i < LOOP_VINFO_LOOP (loop_vinfo)->num_nodes; ++i)
@@ -8683,7 +8683,7 @@ vect_transform_loop (loop_vec_info loop_
      won't work.  */
   slp_instance instance;
   FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), i, instance)
-    vect_free_slp_instance (instance);
+    vect_free_slp_instance (instance, true);
   LOOP_VINFO_SLP_INSTANCES (loop_vinfo).release ();
   /* Clear-up safelen field since its value is invalid after vectorization
      since vectorized loop can have loop-carried dependencies.  */


* [02/46] Remove dead vectorizable_reduction code
From: Richard Sandiford @ 2018-07-24  9:53 UTC
  To: gcc-patches

vectorizable_reduction has old code to cope with cases in which the
given statement belongs to a reduction group but isn't the first statement.
That can no longer happen, since all statements in the group go into the
same SLP node, and we only check the first statement in each node.

The point is to remove the only path through vectorizable_reduction
in which stmt and stmt_info refer to different statements.


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (vectorizable_reduction): Assert that the
	function is not called for second and subsequent members of
	a reduction group.

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:02.965552667 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:06.269523330 +0100
@@ -6162,7 +6162,6 @@ vectorizable_reduction (gimple *stmt, gi
   auto_vec<gimple *> phis;
   int vec_num;
   tree def0, tem;
-  bool first_p = true;
   tree cr_index_scalar_type = NULL_TREE, cr_index_vector_type = NULL_TREE;
   tree cond_reduc_val = NULL_TREE;
 
@@ -6178,15 +6177,8 @@ vectorizable_reduction (gimple *stmt, gi
       nested_cycle = true;
     }
 
-  /* In case of reduction chain we switch to the first stmt in the chain, but
-     we don't update STMT_INFO, since only the last stmt is marked as reduction
-     and has reduction properties.  */
-  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info)
-      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
-    {
-      stmt = REDUC_GROUP_FIRST_ELEMENT (stmt_info);
-      first_p = false;
-    }
+  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
+    gcc_assert (slp_node && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt);
 
   if (gimple_code (stmt) == GIMPLE_PHI)
     {
@@ -7050,8 +7042,7 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (!vec_stmt) /* transformation not required.  */
     {
-      if (first_p)
-	vect_model_reduction_cost (stmt_info, reduc_fn, ncopies, cost_vec);
+      vect_model_reduction_cost (stmt_info, reduc_fn, ncopies, cost_vec);
       if (loop_vinfo && LOOP_VINFO_CAN_FULLY_MASK_P (loop_vinfo))
 	{
 	  if (reduction_type != FOLD_LEFT_REDUCTION


* [05/46] Fix make_ssa_name call in vectorizable_reduction
From: Richard Sandiford @ 2018-07-24  9:54 UTC
  To: gcc-patches

The usual vectoriser dance to create new assignments is:

    new_stmt = gimple_build_assign (vec_dest, ...);
    new_temp = make_ssa_name (vec_dest, new_stmt);
    gimple_assign_set_lhs (new_stmt, new_temp);

but one site in vectorizable_reduction used:

    new_temp = make_ssa_name (vec_dest, new_stmt);

before creating new_stmt.

This method of creating statements probably needs cleaning up, but
that's for another day...


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (vectorizable_reduction): Fix an instance in
	which make_ssa_name was called with new_stmt before new_stmt
	had been created.

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:12.737465897 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:16.421433184 +0100
@@ -7210,9 +7210,10 @@ vectorizable_reduction (gimple *stmt, gi
 	      if (op_type == ternary_op)
 		vop[2] = vec_oprnds2[i];
 
-	      new_temp = make_ssa_name (vec_dest, new_stmt);
-	      new_stmt = gimple_build_assign (new_temp, code,
+	      new_stmt = gimple_build_assign (vec_dest, code,
 					      vop[0], vop[1], vop[2]);
+	      new_temp = make_ssa_name (vec_dest, new_stmt);
+	      gimple_assign_set_lhs (new_stmt, new_temp);
 	    }
 	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 


* [04/46] Factor out the test for a valid reduction input
From: Richard Sandiford @ 2018-07-24  9:54 UTC
  To: gcc-patches

vect_is_slp_reduction and vect_is_simple_reduction had two instances
each of:

              && (is_gimple_assign (def_stmt)
                  || is_gimple_call (def_stmt)
                  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
                           == vect_induction_def
                  || (gimple_code (def_stmt) == GIMPLE_PHI
                      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
                                  == vect_internal_def
                      && !is_loop_header_bb_p (gimple_bb (def_stmt)))))

This patch splits the test out into a subroutine.


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (vect_valid_reduction_input_p): New function,
	split out from...
	(vect_is_slp_reduction): ...here...
	(vect_is_simple_reduction): ...and here.  Remove repetition of tests
	that are already known to be false.

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:09.237496975 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:12.737465897 +0100
@@ -2501,6 +2501,21 @@ report_vect_op (dump_flags_t msg_type, g
   dump_gimple_stmt (msg_type, TDF_SLIM, stmt, 0);
 }
 
+/* DEF_STMT occurs in a loop that contains a potential reduction operation.
+   Return true if the results of DEF_STMT are something that can be
+   accumulated by such a reduction.  */
+
+static bool
+vect_valid_reduction_input_p (gimple *def_stmt)
+{
+  stmt_vec_info def_stmt_info = vinfo_for_stmt (def_stmt);
+  return (is_gimple_assign (def_stmt)
+	  || is_gimple_call (def_stmt)
+	  || STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_induction_def
+	  || (gimple_code (def_stmt) == GIMPLE_PHI
+	      && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def
+	      && !is_loop_header_bb_p (gimple_bb (def_stmt))));
+}
 
 /* Detect SLP reduction of the form:
 
@@ -2624,16 +2639,9 @@ vect_is_slp_reduction (loop_vec_info loo
 	     ("vect_internal_def"), or it's an induction (defined by a
 	     loop-header phi-node).  */
           if (def_stmt
-              && gimple_bb (def_stmt)
+	      && gimple_bb (def_stmt)
 	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
-              && (is_gimple_assign (def_stmt)
-                  || is_gimple_call (def_stmt)
-                  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
-                           == vect_induction_def
-                  || (gimple_code (def_stmt) == GIMPLE_PHI
-                      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
-                                  == vect_internal_def
-                      && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
+	      && vect_valid_reduction_input_p (def_stmt))
 	    {
 	      lhs = gimple_assign_lhs (next_stmt);
 	      next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
@@ -2654,16 +2662,9 @@ vect_is_slp_reduction (loop_vec_info loo
             ("vect_internal_def"), or it's an induction (defined by a
             loop-header phi-node).  */
           if (def_stmt
-              && gimple_bb (def_stmt)
+	      && gimple_bb (def_stmt)
 	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
-              && (is_gimple_assign (def_stmt)
-                  || is_gimple_call (def_stmt)
-                  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
-                              == vect_induction_def
-                  || (gimple_code (def_stmt) == GIMPLE_PHI
-                      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
-                                  == vect_internal_def
-                      && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
+	      && vect_valid_reduction_input_p (def_stmt))
   	    {
 	      if (dump_enabled_p ())
 		{
@@ -3196,15 +3197,7 @@ vect_is_simple_reduction (loop_vec_info
       && (code == COND_EXPR
 	  || !def1 || gimple_nop_p (def1)
 	  || !flow_bb_inside_loop_p (loop, gimple_bb (def1))
-          || (def1 && flow_bb_inside_loop_p (loop, gimple_bb (def1))
-              && (is_gimple_assign (def1)
-		  || is_gimple_call (def1)
-  	          || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def1))
-                      == vect_induction_def
-   	          || (gimple_code (def1) == GIMPLE_PHI
-	              && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def1))
-                          == vect_internal_def
- 	              && !is_loop_header_bb_p (gimple_bb (def1)))))))
+	  || vect_valid_reduction_input_p (def1)))
     {
       if (dump_enabled_p ())
 	report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
@@ -3215,15 +3208,7 @@ vect_is_simple_reduction (loop_vec_info
       && (code == COND_EXPR
 	  || !def2 || gimple_nop_p (def2)
 	  || !flow_bb_inside_loop_p (loop, gimple_bb (def2))
-	  || (def2 && flow_bb_inside_loop_p (loop, gimple_bb (def2))
-	      && (is_gimple_assign (def2)
-		  || is_gimple_call (def2)
-		  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def2))
-		       == vect_induction_def
-		  || (gimple_code (def2) == GIMPLE_PHI
-		      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def2))
-			   == vect_internal_def
-		      && !is_loop_header_bb_p (gimple_bb (def2)))))))
+	  || vect_valid_reduction_input_p (def2)))
     {
       if (! nested_in_vect_loop && orig_code != MINUS_EXPR)
 	{


* [06/46] Add vec_info::add_stmt
From: Richard Sandiford @ 2018-07-24  9:55 UTC
  To: gcc-patches

This patch adds a vec_info function for allocating and setting
stmt_vec_infos.  It's the start of a long process of removing
the global stmt_vec_info array.
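
Illustrative example (paraphrasing the call sites in the patch): code
that previously did

    set_vinfo_for_stmt (new_stmt, new_stmt_vec_info (new_stmt, loop_vinfo));

now does

    stmt_vec_info new_stmt_info = loop_vinfo->add_stmt (new_stmt);

with the stmt_vec_info returned directly, instead of being fetched
again later via vinfo_for_stmt.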


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (stmt_vec_info): Move typedef earlier in file.
	(vec_info::add_stmt): Declare.
	* tree-vectorizer.c (vec_info::add_stmt): New function.
	* tree-vect-data-refs.c (vect_create_data_ref_ptr): Use it.
	* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Likewise.
	(vect_create_epilog_for_reduction, vectorizable_reduction): Likewise.
	(vectorizable_induction): Likewise.
	* tree-vect-slp.c (_bb_vec_info::_bb_vec_info): Likewise.
	* tree-vect-stmts.c (vect_finish_stmt_generation_1): Likewise.
	(vectorizable_simd_clone_call, vectorizable_store): Likewise.
	(vectorizable_load): Likewise.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
	(vect_recog_bool_pattern, vect_recog_mask_conversion_pattern)
	(vect_recog_gather_scatter_pattern): Likewise.
	(append_pattern_def_seq): Likewise.  Remove a check that is
	performed by add_stmt itself.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:09.237496975 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:19.809403100 +0100
@@ -25,6 +25,8 @@ #define GCC_TREE_VECTORIZER_H
 #include "tree-hash-traits.h"
 #include "target.h"
 
+typedef struct _stmt_vec_info *stmt_vec_info;
+
 /* Used for naming of new temporaries.  */
 enum vect_var_kind {
   vect_simple_var,
@@ -215,6 +217,8 @@ struct vec_info {
   vec_info (vec_kind, void *, vec_info_shared *);
   ~vec_info ();
 
+  stmt_vec_info add_stmt (gimple *);
+
   /* The type of vectorization.  */
   vec_kind kind;
 
@@ -761,7 +765,7 @@ struct dataref_aux {
 
 typedef struct data_reference *dr_p;
 
-typedef struct _stmt_vec_info {
+struct _stmt_vec_info {
 
   enum stmt_vec_info_type type;
 
@@ -914,7 +918,7 @@ typedef struct _stmt_vec_info {
      and OPERATION_BITS without changing the result.  */
   unsigned int operation_precision;
   signop operation_sign;
-} *stmt_vec_info;
+};
 
 /* Information about a gather/scatter call.  */
 struct gather_scatter_info {
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:22:09.237496975 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:22:19.809403100 +0100
@@ -507,6 +507,17 @@ vec_info_shared::check_datarefs ()
       gcc_unreachable ();
 }
 
+/* Record that STMT belongs to the vectorizable region.  Create and return
+   an associated stmt_vec_info.  */
+
+stmt_vec_info
+vec_info::add_stmt (gimple *stmt)
+{
+  stmt_vec_info res = new_stmt_vec_info (stmt, this);
+  set_vinfo_for_stmt (stmt, res);
+  return res;
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-23 15:56:47.000000000 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:22:19.801403171 +0100
@@ -4850,7 +4850,7 @@ vect_create_data_ref_ptr (gimple *stmt,
 		 aggr_ptr, loop, &incr_gsi, insert_after,
 		 &indx_before_incr, &indx_after_incr);
       incr = gsi_stmt (incr_gsi);
-      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
+      loop_vinfo->add_stmt (incr);
 
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
@@ -4880,7 +4880,7 @@ vect_create_data_ref_ptr (gimple *stmt,
 		 containing_loop, &incr_gsi, insert_after, &indx_before_incr,
 		 &indx_after_incr);
       incr = gsi_stmt (incr_gsi);
-      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
+      loop_vinfo->add_stmt (incr);
 
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:16.421433184 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:19.801403171 +0100
@@ -845,14 +845,14 @@ _loop_vec_info::_loop_vec_info (struct l
 	{
 	  gimple *phi = gsi_stmt (si);
 	  gimple_set_uid (phi, 0);
-	  set_vinfo_for_stmt (phi, new_stmt_vec_info (phi, this));
+	  add_stmt (phi);
 	}
 
       for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
 	{
 	  gimple *stmt = gsi_stmt (si);
 	  gimple_set_uid (stmt, 0);
-	  set_vinfo_for_stmt (stmt, new_stmt_vec_info (stmt, this));
+	  add_stmt (stmt);
 	}
     }
   free (body);
@@ -4665,8 +4665,7 @@ vect_create_epilog_for_reduction (vec<tr
       /* Create a vector phi node.  */
       tree new_phi_tree = make_ssa_name (cr_index_vector_type);
       new_phi = create_phi_node (new_phi_tree, loop->header);
-      set_vinfo_for_stmt (new_phi,
-			  new_stmt_vec_info (new_phi, loop_vinfo));
+      loop_vinfo->add_stmt (new_phi);
       add_phi_arg (as_a <gphi *> (new_phi), vec_zero,
 		   loop_preheader_edge (loop), UNKNOWN_LOCATION);
 
@@ -4691,10 +4690,8 @@ vect_create_epilog_for_reduction (vec<tr
       gimple *index_condition = gimple_build_assign (induction_index,
 						     index_cond_expr);
       gsi_insert_before (&incr_gsi, index_condition, GSI_SAME_STMT);
-      stmt_vec_info index_vec_info = new_stmt_vec_info (index_condition,
-							loop_vinfo);
+      stmt_vec_info index_vec_info = loop_vinfo->add_stmt (index_condition);
       STMT_VINFO_VECTYPE (index_vec_info) = cr_index_vector_type;
-      set_vinfo_for_stmt (index_condition, index_vec_info);
 
       /* Update the phi with the vec cond.  */
       add_phi_arg (as_a <gphi *> (new_phi), induction_index,
@@ -4741,7 +4738,7 @@ vect_create_epilog_for_reduction (vec<tr
         {
 	  tree new_def = copy_ssa_name (def);
           phi = create_phi_node (new_def, exit_bb);
-          set_vinfo_for_stmt (phi, new_stmt_vec_info (phi, loop_vinfo));
+	  stmt_vec_info phi_info = loop_vinfo->add_stmt (phi);
           if (j == 0)
             new_phis.quick_push (phi);
           else
@@ -4751,7 +4748,7 @@ vect_create_epilog_for_reduction (vec<tr
 	    }
 
           SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def);
-          prev_phi_info = vinfo_for_stmt (phi);
+	  prev_phi_info = phi_info;
         }
     }
 
@@ -4768,11 +4765,9 @@ vect_create_epilog_for_reduction (vec<tr
 	  gphi *outer_phi = create_phi_node (new_result, exit_bb);
 	  SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
 			   PHI_RESULT (phi));
-	  set_vinfo_for_stmt (outer_phi, new_stmt_vec_info (outer_phi,
-							    loop_vinfo));
+	  prev_phi_info = loop_vinfo->add_stmt (outer_phi);
 	  inner_phis.quick_push (phi);
 	  new_phis[i] = outer_phi;
-	  prev_phi_info = vinfo_for_stmt (outer_phi);
           while (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi)))
             {
 	      phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
@@ -4780,10 +4775,9 @@ vect_create_epilog_for_reduction (vec<tr
 	      outer_phi = create_phi_node (new_result, exit_bb);
 	      SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
 			       PHI_RESULT (phi));
-	      set_vinfo_for_stmt (outer_phi, new_stmt_vec_info (outer_phi,
-								loop_vinfo));
+	      stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
 	      STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi;
-	      prev_phi_info = vinfo_for_stmt (outer_phi);
+	      prev_phi_info = outer_phi_info;
 	    }
 	}
     }
@@ -5553,10 +5547,9 @@ vect_create_epilog_for_reduction (vec<tr
       gsi_insert_before (&exit_gsi, epilog_stmt, GSI_SAME_STMT);
       if (nested_in_vect_loop)
         {
-          set_vinfo_for_stmt (epilog_stmt,
-                              new_stmt_vec_info (epilog_stmt, loop_vinfo));
-          STMT_VINFO_RELATED_STMT (vinfo_for_stmt (epilog_stmt)) =
-                STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
+	  stmt_vec_info epilog_stmt_info = loop_vinfo->add_stmt (epilog_stmt);
+	  STMT_VINFO_RELATED_STMT (epilog_stmt_info)
+	    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
 
           if (!double_reduc)
             scalar_results.quick_push (new_temp);
@@ -5697,7 +5690,6 @@ vect_create_epilog_for_reduction (vec<tr
               FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, orig_name)
                 {
                   stmt_vec_info use_stmt_vinfo;
-                  stmt_vec_info new_phi_vinfo;
                   tree vect_phi_init, preheader_arg, vect_phi_res;
                   basic_block bb = gimple_bb (use_stmt);
 		  gimple *use;
@@ -5724,9 +5716,7 @@ vect_create_epilog_for_reduction (vec<tr
 
                   /* Create vector phi node.  */
                   vect_phi = create_phi_node (vec_initial_def, bb);
-                  new_phi_vinfo = new_stmt_vec_info (vect_phi,
-                                    loop_vec_info_for_loop (outer_loop));
-                  set_vinfo_for_stmt (vect_phi, new_phi_vinfo);
+		  loop_vec_info_for_loop (outer_loop)->add_stmt (vect_phi);
 
                   /* Create vs0 - initial def of the double reduction phi.  */
                   preheader_arg = PHI_ARG_DEF_FROM_EDGE (use_stmt,
@@ -6249,8 +6239,7 @@ vectorizable_reduction (gimple *stmt, gi
 		  /* Create the reduction-phi that defines the reduction
 		     operand.  */
 		  gimple *new_phi = create_phi_node (vec_dest, loop->header);
-		  set_vinfo_for_stmt (new_phi,
-				      new_stmt_vec_info (new_phi, loop_vinfo));
+		  stmt_vec_info new_phi_info = loop_vinfo->add_stmt (new_phi);
 
 		  if (slp_node)
 		    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi);
@@ -6260,7 +6249,7 @@ vectorizable_reduction (gimple *stmt, gi
 			STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
 		      else
 			STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi;
-		      prev_phi_info = vinfo_for_stmt (new_phi);
+		      prev_phi_info = new_phi_info;
 		    }
 		}
 	    }
@@ -7537,15 +7526,14 @@ vectorizable_induction (gimple *phi,
 	  /* Create the induction-phi that defines the induction-operand.  */
 	  vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
 	  induction_phi = create_phi_node (vec_dest, iv_loop->header);
-	  set_vinfo_for_stmt (induction_phi,
-			      new_stmt_vec_info (induction_phi, loop_vinfo));
+	  loop_vinfo->add_stmt (induction_phi);
 	  induc_def = PHI_RESULT (induction_phi);
 
 	  /* Create the iv update inside the loop  */
 	  vec_def = make_ssa_name (vec_dest);
 	  new_stmt = gimple_build_assign (vec_def, PLUS_EXPR, induc_def, vec_step);
 	  gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
-	  set_vinfo_for_stmt (new_stmt, new_stmt_vec_info (new_stmt, loop_vinfo));
+	  loop_vinfo->add_stmt (new_stmt);
 
 	  /* Set the arguments of the phi node:  */
 	  add_phi_arg (induction_phi, vec_init, pe, UNKNOWN_LOCATION);
@@ -7593,8 +7581,7 @@ vectorizable_induction (gimple *phi,
 		  gimple_stmt_iterator tgsi = gsi_for_stmt (iv);
 		  gsi_insert_after (&tgsi, new_stmt, GSI_CONTINUE_LINKING);
 		}
-	      set_vinfo_for_stmt (new_stmt,
-				  new_stmt_vec_info (new_stmt, loop_vinfo));
+	      loop_vinfo->add_stmt (new_stmt);
 	      SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
 	    }
 	}
@@ -7623,8 +7610,7 @@ vectorizable_induction (gimple *phi,
 	  new_bb = gsi_insert_on_edge_immediate (loop_preheader_edge (iv_loop),
 						 new_stmt);
 	  gcc_assert (!new_bb);
-	  set_vinfo_for_stmt (new_stmt,
-			      new_stmt_vec_info (new_stmt, loop_vinfo));
+	  loop_vinfo->add_stmt (new_stmt);
 	}
     }
   else
@@ -7728,15 +7714,14 @@ vectorizable_induction (gimple *phi,
   /* Create the induction-phi that defines the induction-operand.  */
   vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
   induction_phi = create_phi_node (vec_dest, iv_loop->header);
-  set_vinfo_for_stmt (induction_phi,
-		      new_stmt_vec_info (induction_phi, loop_vinfo));
+  stmt_vec_info induction_phi_info = loop_vinfo->add_stmt (induction_phi);
   induc_def = PHI_RESULT (induction_phi);
 
   /* Create the iv update inside the loop  */
   vec_def = make_ssa_name (vec_dest);
   new_stmt = gimple_build_assign (vec_def, PLUS_EXPR, induc_def, vec_step);
   gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
-  set_vinfo_for_stmt (new_stmt, new_stmt_vec_info (new_stmt, loop_vinfo));
+  stmt_vec_info new_stmt_info = loop_vinfo->add_stmt (new_stmt);
 
   /* Set the arguments of the phi node:  */
   add_phi_arg (induction_phi, vec_init, pe, UNKNOWN_LOCATION);
@@ -7781,7 +7766,7 @@ vectorizable_induction (gimple *phi,
       vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
 
       vec_def = induc_def;
-      prev_stmt_vinfo = vinfo_for_stmt (induction_phi);
+      prev_stmt_vinfo = induction_phi_info;
       for (i = 1; i < ncopies; i++)
 	{
 	  /* vec_i = vec_prev + vec_step  */
@@ -7791,10 +7776,9 @@ vectorizable_induction (gimple *phi,
 	  gimple_assign_set_lhs (new_stmt, vec_def);
  
 	  gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
-	  set_vinfo_for_stmt (new_stmt,
-			      new_stmt_vec_info (new_stmt, loop_vinfo));
+	  new_stmt_info = loop_vinfo->add_stmt (new_stmt);
 	  STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt;
-	  prev_stmt_vinfo = vinfo_for_stmt (new_stmt);
+	  prev_stmt_vinfo = new_stmt_info;
 	}
     }
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:09.237496975 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:19.805403136 +0100
@@ -2494,7 +2494,7 @@ _bb_vec_info::_bb_vec_info (gimple_stmt_
     {
       gimple *stmt = gsi_stmt (gsi);
       gimple_set_uid (stmt, 0);
-      set_vinfo_for_stmt (stmt, new_stmt_vec_info (stmt, this));
+      add_stmt (stmt);
     }
 
   bb->aux = this;
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-13 10:11:14.533842692 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:19.809403100 +0100
@@ -1744,7 +1744,7 @@ vect_finish_stmt_generation_1 (gimple *s
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
 
-  set_vinfo_for_stmt (vec_stmt, new_stmt_vec_info (vec_stmt, vinfo));
+  vinfo->add_stmt (vec_stmt);
 
   if (dump_enabled_p ())
     {
@@ -4183,8 +4183,7 @@ vectorizable_simd_clone_call (gimple *st
 		    }
 		  tree phi_res = copy_ssa_name (op);
 		  gphi *new_phi = create_phi_node (phi_res, loop->header);
-		  set_vinfo_for_stmt (new_phi,
-				      new_stmt_vec_info (new_phi, loop_vinfo));
+		  loop_vinfo->add_stmt (new_phi);
 		  add_phi_arg (new_phi, arginfo[i].op,
 			       loop_preheader_edge (loop), UNKNOWN_LOCATION);
 		  enum tree_code code
@@ -4201,8 +4200,7 @@ vectorizable_simd_clone_call (gimple *st
 		    = gimple_build_assign (phi_arg, code, phi_res, tcst);
 		  gimple_stmt_iterator si = gsi_after_labels (loop->header);
 		  gsi_insert_after (&si, new_stmt, GSI_NEW_STMT);
-		  set_vinfo_for_stmt (new_stmt,
-				      new_stmt_vec_info (new_stmt, loop_vinfo));
+		  loop_vinfo->add_stmt (new_stmt);
 		  add_phi_arg (new_phi, phi_arg, loop_latch_edge (loop),
 			       UNKNOWN_LOCATION);
 		  arginfo[i].op = phi_res;
@@ -6731,7 +6729,7 @@ vectorizable_store (gimple *stmt, gimple
 		 loop, &incr_gsi, insert_after,
 		 &offvar, NULL);
       incr = gsi_stmt (incr_gsi);
-      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
+      loop_vinfo->add_stmt (incr);
 
       stride_step = cse_and_gimplify_to_preheader (loop_vinfo, stride_step);
 
@@ -7729,7 +7727,7 @@ vectorizable_load (gimple *stmt, gimple_
 		 loop, &incr_gsi, insert_after,
 		 &offvar, NULL);
       incr = gsi_stmt (incr_gsi);
-      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
+      loop_vinfo->add_stmt (incr);
 
       stride_step = cse_and_gimplify_to_preheader (loop_vinfo, stride_step);
 
@@ -8488,8 +8486,7 @@ vectorizable_load (gimple *stmt, gimple_
 					        (gimple_assign_rhs1 (stmt))));
 		      new_temp = vect_init_vector (stmt, tem, vectype, NULL);
 		      new_stmt = SSA_NAME_DEF_STMT (new_temp);
-		      set_vinfo_for_stmt (new_stmt,
-					  new_stmt_vec_info (new_stmt, vinfo));
+		      vinfo->add_stmt (new_stmt);
 		    }
 		  else
 		    {
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-18 18:44:23.517905682 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:19.805403136 +0100
@@ -103,11 +103,7 @@ vect_init_pattern_stmt (gimple *pattern_
 {
   stmt_vec_info pattern_stmt_info = vinfo_for_stmt (pattern_stmt);
   if (pattern_stmt_info == NULL)
-    {
-      pattern_stmt_info = new_stmt_vec_info (pattern_stmt,
-					     orig_stmt_info->vinfo);
-      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
-    }
+    pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
   STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info->stmt;
@@ -141,9 +137,7 @@ append_pattern_def_seq (stmt_vec_info st
   vec_info *vinfo = stmt_info->vinfo;
   if (vectype)
     {
-      gcc_assert (!vinfo_for_stmt (new_stmt));
-      stmt_vec_info new_stmt_info = new_stmt_vec_info (new_stmt, vinfo);
-      set_vinfo_for_stmt (new_stmt, new_stmt_info);
+      stmt_vec_info new_stmt_info = vinfo->add_stmt (new_stmt);
       STMT_VINFO_VECTYPE (new_stmt_info) = vectype;
     }
   gimple_seq_add_stmt_without_update (&STMT_VINFO_PATTERN_DEF_SEQ (stmt_info),
@@ -3832,8 +3826,7 @@ vect_recog_bool_pattern (stmt_vec_info s
 	  rhs = rhs2;
 	}
       pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
-      pattern_stmt_info = new_stmt_vec_info (pattern_stmt, vinfo);
-      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
+      pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
       STMT_VINFO_DATA_REF (pattern_stmt_info)
 	= STMT_VINFO_DATA_REF (stmt_vinfo);
       STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
@@ -3958,8 +3951,7 @@ vect_recog_mask_conversion_pattern (stmt
 	}
       gimple_call_set_nothrow (pattern_stmt, true);
 
-      pattern_stmt_info = new_stmt_vec_info (pattern_stmt, vinfo);
-      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
+      pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
       if (STMT_VINFO_DATA_REF (stmt_vinfo))
 	{
 	  STMT_VINFO_DATA_REF (pattern_stmt_info)
@@ -4290,9 +4282,7 @@ vect_recog_gather_scatter_pattern (stmt_
 
   /* Copy across relevant vectorization info and associate DR with the
      new pattern statement instead of the original statement.  */
-  stmt_vec_info pattern_stmt_info = new_stmt_vec_info (pattern_stmt,
-						       loop_vinfo);
-  set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
+  stmt_vec_info pattern_stmt_info = loop_vinfo->add_stmt (pattern_stmt);
   STMT_VINFO_DATA_REF (pattern_stmt_info) = dr;
   STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
     = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);


* [07/46] Add vec_info::lookup_stmt
From: Richard Sandiford @ 2018-07-24  9:55 UTC
  To: gcc-patches

This patch adds a vec_info replacement for vinfo_for_stmt.  The main
difference is that the new routine can cope with arbitrary statements,
so there's no need to call vect_stmt_in_region_p first.

The patch only converts calls that are still needed at the end of the
series.  Later patches get rid of most other calls to vinfo_for_stmt.
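
Illustrative example (a paraphrase of the change to
vect_determine_min_output_precision_1 below): a check like

    if (!vect_stmt_in_region_p (vinfo, use_stmt))
      return false;
    stmt_vec_info use_stmt_info = vinfo_for_stmt (use_stmt);

becomes

    stmt_vec_info use_stmt_info = vinfo->lookup_stmt (use_stmt);
    if (!use_stmt_info)
      return false;

since lookup_stmt simply returns null for statements outside the
vectorizable region.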


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::lookup_stmt): Declare.
	* tree-vectorizer.c (vec_info::lookup_stmt): New function.
	* tree-vect-loop.c (vect_determine_vf_for_stmt): Use it instead
	of vinfo_for_stmt.
	(vect_determine_vectorization_factor, vect_analyze_scalar_cycles_1)
	(vect_compute_single_scalar_iteration_cost, vect_analyze_loop_form)
	(vect_update_vf_for_slp, vect_analyze_loop_operations)
	(vect_is_slp_reduction, vectorizable_induction)
	(vect_transform_loop_stmt, vect_transform_loop): Likewise.
	* tree-vect-patterns.c (vect_init_pattern_stmt)
	(vect_determine_min_output_precision_1, vect_determine_precisions)
	(vect_pattern_recog): Likewise.
	* tree-vect-stmts.c (vect_analyze_stmt, vect_transform_stmt): Likewise.
	* config/powerpcspe/powerpcspe.c (rs6000_density_test): Likewise.
	* config/rs6000/rs6000.c (rs6000_density_test): Likewise.
	* tree-vect-slp.c (vect_detect_hybrid_slp_stmts): Likewise.
	(vect_detect_hybrid_slp_1, vect_detect_hybrid_slp_2)
	(vect_detect_hybrid_slp): Likewise.  Change the walk_stmt_info
	info field from a loop to a loop_vec_info.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:19.809403100 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:23.797367688 +0100
@@ -218,6 +218,7 @@ struct vec_info {
   ~vec_info ();
 
   stmt_vec_info add_stmt (gimple *);
+  stmt_vec_info lookup_stmt (gimple *);
 
   /* The type of vectorization.  */
   vec_kind kind;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:22:19.809403100 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:22:23.797367688 +0100
@@ -518,6 +518,23 @@ vec_info::add_stmt (gimple *stmt)
   return res;
 }
 
+/* If STMT has an associated stmt_vec_info, return that vec_info, otherwise
+   return null.  It is safe to call this function on any statement, even if
+   it might not be part of the vectorizable region.  */
+
+stmt_vec_info
+vec_info::lookup_stmt (gimple *stmt)
+{
+  unsigned int uid = gimple_uid (stmt);
+  if (uid > 0 && uid - 1 < stmt_vec_infos.length ())
+    {
+      stmt_vec_info res = stmt_vec_infos[uid - 1];
+      if (res && res->stmt == stmt)
+	return res;
+    }
+  return NULL;
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:19.801403171 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:23.793367723 +0100
@@ -213,6 +213,7 @@ vect_determine_vf_for_stmt_1 (stmt_vec_i
 vect_determine_vf_for_stmt (stmt_vec_info stmt_info, poly_uint64 *vf,
 			    vec<stmt_vec_info > *mask_producers)
 {
+  vec_info *vinfo = stmt_info->vinfo;
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_NOTE, vect_location, "==> examining statement: ");
@@ -231,7 +232,7 @@ vect_determine_vf_for_stmt (stmt_vec_inf
       for (gimple_stmt_iterator si = gsi_start (pattern_def_seq);
 	   !gsi_end_p (si); gsi_next (&si))
 	{
-	  stmt_vec_info def_stmt_info = vinfo_for_stmt (gsi_stmt (si));
+	  stmt_vec_info def_stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
@@ -306,7 +307,7 @@ vect_determine_vectorization_factor (loo
 	   gsi_next (&si))
 	{
 	  phi = si.phi ();
-	  stmt_info = vinfo_for_stmt (phi);
+	  stmt_info = loop_vinfo->lookup_stmt (phi);
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location, "==> examining phi: ");
@@ -366,7 +367,7 @@ vect_determine_vectorization_factor (loo
       for (gimple_stmt_iterator si = gsi_start_bb (bb); !gsi_end_p (si);
 	   gsi_next (&si))
 	{
-	  stmt_info = vinfo_for_stmt (gsi_stmt (si));
+	  stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  if (!vect_determine_vf_for_stmt (stmt_info, &vectorization_factor,
 					   &mask_producers))
 	    return false;
@@ -487,7 +488,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
       gphi *phi = gsi.phi ();
       tree access_fn = NULL;
       tree def = PHI_RESULT (phi);
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
+      stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (phi);
 
       if (dump_enabled_p ())
 	{
@@ -1101,7 +1102,7 @@ vect_compute_single_scalar_iteration_cos
       for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
         {
 	  gimple *stmt = gsi_stmt (si);
-          stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
 
           if (!is_gimple_assign (stmt) && !is_gimple_call (stmt))
             continue;
@@ -1390,10 +1391,14 @@ vect_analyze_loop_form (struct loop *loo
         }
     }
 
-  STMT_VINFO_TYPE (vinfo_for_stmt (loop_cond)) = loop_exit_ctrl_vec_info_type;
+  stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (loop_cond);
+  STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type;
   if (inner_loop_cond)
-    STMT_VINFO_TYPE (vinfo_for_stmt (inner_loop_cond))
-      = loop_exit_ctrl_vec_info_type;
+    {
+      stmt_vec_info inner_loop_cond_info
+	= loop_vinfo->lookup_stmt (inner_loop_cond);
+      STMT_VINFO_TYPE (inner_loop_cond_info) = loop_exit_ctrl_vec_info_type;
+    }
 
   gcc_assert (!loop->aux);
   loop->aux = loop_vinfo;
@@ -1432,7 +1437,7 @@ vect_update_vf_for_slp (loop_vec_info lo
 	   gsi_next (&si))
 	{
 	  gimple *stmt = gsi_stmt (si);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_info)
 	      && STMT_VINFO_RELATED_STMT (stmt_info))
 	    {
@@ -1532,7 +1537,7 @@ vect_analyze_loop_operations (loop_vec_i
           gphi *phi = si.phi ();
           ok = true;
 
-          stmt_info = vinfo_for_stmt (phi);
+	  stmt_info = loop_vinfo->lookup_stmt (phi);
           if (dump_enabled_p ())
             {
               dump_printf_loc (MSG_NOTE, vect_location, "examining phi: ");
@@ -2238,13 +2243,13 @@ vect_analyze_loop_2 (loop_vec_info loop_
       for (gimple_stmt_iterator si = gsi_start_phis (bb);
 	   !gsi_end_p (si); gsi_next (&si))
 	{
-	  stmt_vec_info stmt_info = vinfo_for_stmt (gsi_stmt (si));
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  STMT_SLP_TYPE (stmt_info) = loop_vect;
 	}
       for (gimple_stmt_iterator si = gsi_start_bb (bb);
 	   !gsi_end_p (si); gsi_next (&si))
 	{
-	  stmt_vec_info stmt_info = vinfo_for_stmt (gsi_stmt (si));
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  STMT_SLP_TYPE (stmt_info) = loop_vect;
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
 	    {
@@ -2253,10 +2258,8 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	      STMT_SLP_TYPE (stmt_info) = loop_vect;
 	      for (gimple_stmt_iterator pi = gsi_start (pattern_def_seq);
 		   !gsi_end_p (pi); gsi_next (&pi))
-		{
-		  gimple *pstmt = gsi_stmt (pi);
-		  STMT_SLP_TYPE (vinfo_for_stmt (pstmt)) = loop_vect;
-		}
+		STMT_SLP_TYPE (loop_vinfo->lookup_stmt (gsi_stmt (pi)))
+		  = loop_vect;
 	    }
 	}
     }
@@ -2602,7 +2605,7 @@ vect_is_slp_reduction (loop_vec_info loo
         return false;
 
       /* Insert USE_STMT into reduction chain.  */
-      use_stmt_info = vinfo_for_stmt (loop_use_stmt);
+      use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
       if (current_stmt)
         {
           current_stmt_info = vinfo_for_stmt (current_stmt);
@@ -5549,7 +5552,7 @@ vect_create_epilog_for_reduction (vec<tr
         {
 	  stmt_vec_info epilog_stmt_info = loop_vinfo->add_stmt (epilog_stmt);
 	  STMT_VINFO_RELATED_STMT (epilog_stmt_info)
-	    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
+	    = STMT_VINFO_RELATED_STMT (loop_vinfo->lookup_stmt (new_phi));
 
           if (!double_reduc)
             scalar_results.quick_push (new_temp);
@@ -5653,7 +5656,8 @@ vect_create_epilog_for_reduction (vec<tr
         {
           if (outer_loop)
             {
-              stmt_vec_info exit_phi_vinfo = vinfo_for_stmt (exit_phi);
+	      stmt_vec_info exit_phi_vinfo
+		= loop_vinfo->lookup_stmt (exit_phi);
               gphi *vect_phi;
 
               /* FORNOW. Currently not supporting the case that an inner-loop
@@ -5700,7 +5704,7 @@ vect_create_epilog_for_reduction (vec<tr
                       || gimple_phi_num_args (use_stmt) != 2
                       || bb->loop_father != outer_loop)
                     continue;
-                  use_stmt_vinfo = vinfo_for_stmt (use_stmt);
+		  use_stmt_vinfo = loop_vinfo->lookup_stmt (use_stmt);
                   if (!use_stmt_vinfo
                       || STMT_VINFO_DEF_TYPE (use_stmt_vinfo)
                           != vect_double_reduction_def)
@@ -7377,7 +7381,7 @@ vectorizable_induction (gimple *phi,
 	}
       if (exit_phi)
 	{
-	  stmt_vec_info exit_phi_vinfo  = vinfo_for_stmt (exit_phi);
+	  stmt_vec_info exit_phi_vinfo = loop_vinfo->lookup_stmt (exit_phi);
 	  if (!(STMT_VINFO_RELEVANT_P (exit_phi_vinfo)
 		&& !STMT_VINFO_LIVE_P (exit_phi_vinfo)))
 	    {
@@ -7801,7 +7805,7 @@ vectorizable_induction (gimple *phi,
         }
       if (exit_phi)
 	{
-	  stmt_vec_info stmt_vinfo = vinfo_for_stmt (exit_phi);
+	  stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (exit_phi);
 	  /* FORNOW. Currently not supporting the case that an inner-loop induction
 	     is not used in the outer-loop (i.e. only outside the outer-loop).  */
 	  gcc_assert (STMT_VINFO_RELEVANT_P (stmt_vinfo)
@@ -8260,7 +8264,7 @@ vect_transform_loop_stmt (loop_vec_info
 {
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
   if (!stmt_info)
     return;
 
@@ -8463,7 +8467,7 @@ vect_transform_loop (loop_vec_info loop_
                                "------>vectorizing phi: ");
 	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
 	    }
-	  stmt_info = vinfo_for_stmt (phi);
+	  stmt_info = loop_vinfo->lookup_stmt (phi);
 	  if (!stmt_info)
 	    continue;
 
@@ -8504,7 +8508,7 @@ vect_transform_loop (loop_vec_info loop_
 	    }
 	  else
 	    {
-	      stmt_info = vinfo_for_stmt (stmt);
+	      stmt_info = loop_vinfo->lookup_stmt (stmt);
 
 	      /* vector stmts created in the outer-loop during vectorization of
 		 stmts in an inner-loop may not have a stmt_info, and do not
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:19.805403136 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:23.793367723 +0100
@@ -101,7 +101,8 @@ vect_pattern_detected (const char *name,
 vect_init_pattern_stmt (gimple *pattern_stmt, stmt_vec_info orig_stmt_info,
 			tree vectype)
 {
-  stmt_vec_info pattern_stmt_info = vinfo_for_stmt (pattern_stmt);
+  vec_info *vinfo = orig_stmt_info->vinfo;
+  stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
   if (pattern_stmt_info == NULL)
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
@@ -4401,6 +4402,7 @@ vect_set_min_input_precision (stmt_vec_i
 vect_determine_min_output_precision_1 (stmt_vec_info stmt_info, tree lhs)
 {
   /* Take the maximum precision required by users of the result.  */
+  vec_info *vinfo = stmt_info->vinfo;
   unsigned int precision = 0;
   imm_use_iterator iter;
   use_operand_p use;
@@ -4409,10 +4411,8 @@ vect_determine_min_output_precision_1 (s
       gimple *use_stmt = USE_STMT (use);
       if (is_gimple_debug (use_stmt))
 	continue;
-      if (!vect_stmt_in_region_p (stmt_info->vinfo, use_stmt))
-	return false;
-      stmt_vec_info use_stmt_info = vinfo_for_stmt (use_stmt);
-      if (!use_stmt_info->min_input_precision)
+      stmt_vec_info use_stmt_info = vinfo->lookup_stmt (use_stmt);
+      if (!use_stmt_info || !use_stmt_info->min_input_precision)
 	return false;
       precision = MAX (precision, use_stmt_info->min_input_precision);
     }
@@ -4657,7 +4657,8 @@ vect_determine_precisions (vec_info *vin
 	  basic_block bb = bbs[nbbs - i - 1];
 	  for (gimple_stmt_iterator si = gsi_last_bb (bb);
 	       !gsi_end_p (si); gsi_prev (&si))
-	    vect_determine_stmt_precisions (vinfo_for_stmt (gsi_stmt (si)));
+	    vect_determine_stmt_precisions
+	      (vinfo->lookup_stmt (gsi_stmt (si)));
 	}
     }
   else
@@ -4672,7 +4673,7 @@ vect_determine_precisions (vec_info *vin
 	  else
 	    gsi_prev (&si);
 	  stmt = gsi_stmt (si);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
 	  if (stmt_info && STMT_VINFO_VECTORIZABLE (stmt_info))
 	    vect_determine_stmt_precisions (stmt_info);
 	}
@@ -4971,7 +4972,7 @@ vect_pattern_recog (vec_info *vinfo)
 	   gsi_stmt (si) != gsi_stmt (bb_vinfo->region_end); gsi_next (&si))
 	{
 	  gimple *stmt = gsi_stmt (si);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = bb_vinfo->lookup_stmt (stmt);
 	  if (stmt_info && !STMT_VINFO_VECTORIZABLE (stmt_info))
 	    continue;
 
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:19.809403100 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:23.797367688 +0100
@@ -9377,6 +9377,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
 		   slp_instance node_instance, stmt_vector_for_cost *cost_vec)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  vec_info *vinfo = stmt_info->vinfo;
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   enum vect_relevant relevance = STMT_VINFO_RELEVANT (stmt_info);
   bool ok;
@@ -9407,8 +9408,10 @@ vect_analyze_stmt (gimple *stmt, bool *n
       for (si = gsi_start (pattern_def_seq); !gsi_end_p (si); gsi_next (&si))
 	{
 	  gimple *pattern_def_stmt = gsi_stmt (si);
-	  if (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_def_stmt))
-	      || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_def_stmt)))
+	  stmt_vec_info pattern_def_stmt_info
+	    = vinfo->lookup_stmt (gsi_stmt (si));
+	  if (STMT_VINFO_RELEVANT_P (pattern_def_stmt_info)
+	      || STMT_VINFO_LIVE_P (pattern_def_stmt_info))
 	    {
 	      /* Analyze def stmt of STMT if it's a pattern stmt.  */
 	      if (dump_enabled_p ())
@@ -9605,9 +9608,10 @@ vect_transform_stmt (gimple *stmt, gimpl
 		     bool *grouped_store, slp_tree slp_node,
                      slp_instance slp_node_instance)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  vec_info *vinfo = stmt_info->vinfo;
   bool is_store = false;
   gimple *vec_stmt = NULL;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   bool done;
 
   gcc_assert (slp_node || !PURE_SLP_STMT (stmt_info));
@@ -9728,7 +9732,6 @@ vect_transform_stmt (gimple *stmt, gimpl
       imm_use_iterator imm_iter;
       use_operand_p use_p;
       tree scalar_dest;
-      gimple *exit_phi;
 
       if (dump_enabled_p ())
         dump_printf_loc (MSG_NOTE, vect_location,
@@ -9743,13 +9746,12 @@ vect_transform_stmt (gimple *stmt, gimpl
         scalar_dest = gimple_assign_lhs (stmt);
 
       FOR_EACH_IMM_USE_FAST (use_p, imm_iter, scalar_dest)
-       {
-         if (!flow_bb_inside_loop_p (innerloop, gimple_bb (USE_STMT (use_p))))
-           {
-             exit_phi = USE_STMT (use_p);
-             STMT_VINFO_VEC_STMT (vinfo_for_stmt (exit_phi)) = vec_stmt;
-           }
-       }
+	if (!flow_bb_inside_loop_p (innerloop, gimple_bb (USE_STMT (use_p))))
+	  {
+	    stmt_vec_info exit_phi_info
+	      = vinfo->lookup_stmt (USE_STMT (use_p));
+	    STMT_VINFO_VEC_STMT (exit_phi_info) = vec_stmt;
+	  }
     }
 
   /* Handle stmts whose DEF is used outside the loop-nest that is
Index: gcc/config/powerpcspe/powerpcspe.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe.c	2018-07-18 18:44:23.681904201 +0100
+++ gcc/config/powerpcspe/powerpcspe.c	2018-07-24 10:22:23.785367794 +0100
@@ -6030,6 +6030,7 @@ rs6000_density_test (rs6000_cost_data *d
   struct loop *loop = data->loop_info;
   basic_block *bbs = get_loop_body (loop);
   int nbbs = loop->num_nodes;
+  loop_vec_info loop_vinfo = loop_vec_info_for_loop (data->loop_info);
   int vec_cost = data->cost[vect_body], not_vec_cost = 0;
   int i, density_pct;
 
@@ -6041,7 +6042,7 @@ rs6000_density_test (rs6000_cost_data *d
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
 
 	  if (!STMT_VINFO_RELEVANT_P (stmt_info)
 	      && !STMT_VINFO_IN_PATTERN_P (stmt_info))
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c	2018-07-23 17:14:27.395541019 +0100
+++ gcc/config/rs6000/rs6000.c	2018-07-24 10:22:23.793367723 +0100
@@ -5566,6 +5566,7 @@ rs6000_density_test (rs6000_cost_data *d
   struct loop *loop = data->loop_info;
   basic_block *bbs = get_loop_body (loop);
   int nbbs = loop->num_nodes;
+  loop_vec_info loop_vinfo = loop_vec_info_for_loop (data->loop_info);
   int vec_cost = data->cost[vect_body], not_vec_cost = 0;
   int i, density_pct;
 
@@ -5577,7 +5578,7 @@ rs6000_density_test (rs6000_cost_data *d
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
 
 	  if (!STMT_VINFO_RELEVANT_P (stmt_info)
 	      && !STMT_VINFO_IN_PATTERN_P (stmt_info))
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:19.805403136 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:23.793367723 +0100
@@ -2315,7 +2315,6 @@ vect_detect_hybrid_slp_stmts (slp_tree n
   stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
   slp_tree child;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
-  struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   int j;
 
   /* Propagate hybrid down the SLP tree.  */
@@ -2340,9 +2339,9 @@ vect_detect_hybrid_slp_stmts (slp_tree n
       if (def)
 	FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, def)
 	  {
-	    if (!flow_bb_inside_loop_p (loop, gimple_bb (use_stmt)))
+	    use_vinfo = loop_vinfo->lookup_stmt (use_stmt);
+	    if (!use_vinfo)
 	      continue;
-	    use_vinfo = vinfo_for_stmt (use_stmt);
 	    if (STMT_VINFO_IN_PATTERN_P (use_vinfo)
 		&& STMT_VINFO_RELATED_STMT (use_vinfo))
 	      use_vinfo = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (use_vinfo));
@@ -2385,25 +2384,23 @@ vect_detect_hybrid_slp_stmts (slp_tree n
 vect_detect_hybrid_slp_1 (tree *tp, int *, void *data)
 {
   walk_stmt_info *wi = (walk_stmt_info *)data;
-  struct loop *loopp = (struct loop *)wi->info;
+  loop_vec_info loop_vinfo = (loop_vec_info) wi->info;
 
   if (wi->is_lhs)
     return NULL_TREE;
 
+  stmt_vec_info def_stmt_info;
   if (TREE_CODE (*tp) == SSA_NAME
-      && !SSA_NAME_IS_DEFAULT_DEF (*tp))
+      && !SSA_NAME_IS_DEFAULT_DEF (*tp)
+      && (def_stmt_info = loop_vinfo->lookup_stmt (SSA_NAME_DEF_STMT (*tp)))
+      && PURE_SLP_STMT (def_stmt_info))
     {
-      gimple *def_stmt = SSA_NAME_DEF_STMT (*tp);
-      if (flow_bb_inside_loop_p (loopp, gimple_bb (def_stmt))
-	  && PURE_SLP_STMT (vinfo_for_stmt (def_stmt)))
+      if (dump_enabled_p ())
 	{
-	  if (dump_enabled_p ())
-	    {
-	      dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
-	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, def_stmt, 0);
-	    }
-	  STMT_SLP_TYPE (vinfo_for_stmt (def_stmt)) = hybrid;
+	  dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, def_stmt_info->stmt, 0);
 	}
+      STMT_SLP_TYPE (def_stmt_info) = hybrid;
     }
 
   return NULL_TREE;
@@ -2411,9 +2408,10 @@ vect_detect_hybrid_slp_1 (tree *tp, int
 
 static tree
 vect_detect_hybrid_slp_2 (gimple_stmt_iterator *gsi, bool *handled,
-			  walk_stmt_info *)
+			  walk_stmt_info *wi)
 {
-  stmt_vec_info use_vinfo = vinfo_for_stmt (gsi_stmt (*gsi));
+  loop_vec_info loop_vinfo = (loop_vec_info) wi->info;
+  stmt_vec_info use_vinfo = loop_vinfo->lookup_stmt (gsi_stmt (*gsi));
   /* If the stmt is in a SLP instance then this isn't a reason
      to mark use definitions in other SLP instances as hybrid.  */
   if (! STMT_SLP_TYPE (use_vinfo)
@@ -2447,12 +2445,12 @@ vect_detect_hybrid_slp (loop_vec_info lo
 	   gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
-	  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
 	    {
 	      walk_stmt_info wi;
 	      memset (&wi, 0, sizeof (wi));
-	      wi.info = LOOP_VINFO_LOOP (loop_vinfo);
+	      wi.info = loop_vinfo;
 	      gimple_stmt_iterator gsi2
 		= gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
 	      walk_gimple_stmt (&gsi2, vect_detect_hybrid_slp_2,


* [08/46] Add vec_info::lookup_def
From: Richard Sandiford @ 2018-07-24  9:55 UTC
  To: gcc-patches

This patch adds a vec_info helper for checking whether an operand is an
SSA_NAME that is defined in the vectorisable region.
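
As a usage sketch (a hand-written example, not part of the patch;
the function name is invented), a caller of the new helper avoids
the usual TREE_CODE and region checks:

  static bool
  op_defined_in_region_p (vec_info *vinfo, tree op)
  {
    /* lookup_def copes with constants and default definitions itself,
       so the caller needs no TREE_CODE or SSA_NAME_IS_DEFAULT_DEF
       tests.  */
    stmt_vec_info def_stmt_info = vinfo->lookup_def (op);
    return def_stmt_info != NULL;
  }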


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::lookup_def): Declare.
	* tree-vectorizer.c (vec_info::lookup_def): New function.
	* tree-vect-patterns.c (vect_get_internal_def): Use it.
	(vect_widened_op_tree): Likewise.
	* tree-vect-stmts.c (vect_is_simple_use): Likewise.
	* tree-vect-loop.c (vect_analyze_loop_operations): Likewise.
	(vectorizable_reduction): Likewise.
	(vect_valid_reduction_input_p): Take a stmt_vec_info instead
	of a gimple *.
	(vect_is_slp_reduction): Update calls accordingly.  Use
	vec_info::lookup_def.
	(vect_is_simple_reduction): Likewise.
	* tree-vect-slp.c (vect_detect_hybrid_slp_1): Use vec_info::lookup_def.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:23.797367688 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:27.285336715 +0100
@@ -219,6 +219,7 @@ struct vec_info {
 
   stmt_vec_info add_stmt (gimple *);
   stmt_vec_info lookup_stmt (gimple *);
+  stmt_vec_info lookup_def (tree);
 
   /* The type of vectorization.  */
   vec_kind kind;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:22:23.797367688 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:22:27.285336715 +0100
@@ -535,6 +535,19 @@ vec_info::lookup_stmt (gimple *stmt)
   return NULL;
 }
 
+/* If NAME is an SSA_NAME and its definition has an associated stmt_vec_info,
+   return that stmt_vec_info, otherwise return null.  It is safe to call
+   this on arbitrary operands.  */
+
+stmt_vec_info
+vec_info::lookup_def (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME
+      && !SSA_NAME_IS_DEFAULT_DEF (name))
+    return lookup_stmt (SSA_NAME_DEF_STMT (name));
+  return NULL;
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:23.793367723 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:27.281336751 +0100
@@ -227,14 +227,11 @@ vect_element_precision (unsigned int pre
 static stmt_vec_info
 vect_get_internal_def (vec_info *vinfo, tree op)
 {
-  vect_def_type dt;
-  gimple *def_stmt;
-  if (TREE_CODE (op) != SSA_NAME
-      || !vect_is_simple_use (op, vinfo, &dt, &def_stmt)
-      || dt != vect_internal_def)
-    return NULL;
-
-  return vinfo_for_stmt (def_stmt);
+  stmt_vec_info def_stmt_info = vinfo->lookup_def (op);
+  if (def_stmt_info
+      && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def)
+    return def_stmt_info;
+  return NULL;
 }
 
 /* Check whether NAME, an ssa-name used in USE_STMT,
@@ -528,6 +525,7 @@ vect_widened_op_tree (stmt_vec_info stmt
 		      vect_unpromoted_value *unprom, tree *common_type)
 {
   /* Check for an integer operation with the right code.  */
+  vec_info *vinfo = stmt_info->vinfo;
   gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
   if (!assign)
     return 0;
@@ -584,7 +582,7 @@ vect_widened_op_tree (stmt_vec_info stmt
 
 	      /* Recursively process the definition of the operand.  */
 	      stmt_vec_info def_stmt_info
-		= vinfo_for_stmt (SSA_NAME_DEF_STMT (this_unprom->op));
+		= vinfo->lookup_def (this_unprom->op);
 	      nops = vect_widened_op_tree (def_stmt_info, code, widened_code,
 					   shift_p, max_nops, this_unprom,
 					   common_type);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:23.797367688 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:27.281336751 +0100
@@ -10092,11 +10092,11 @@ vect_is_simple_use (tree operand, vec_in
   else
     {
       gimple *def_stmt = SSA_NAME_DEF_STMT (operand);
-      if (! vect_stmt_in_region_p (vinfo, def_stmt))
+      stmt_vec_info stmt_vinfo = vinfo->lookup_def (operand);
+      if (!stmt_vinfo)
 	*dt = vect_external_def;
       else
 	{
-	  stmt_vec_info stmt_vinfo = vinfo_for_stmt (def_stmt);
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_vinfo))
 	    {
 	      def_stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:23.793367723 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:27.277336786 +0100
@@ -1569,26 +1569,19 @@ vect_analyze_loop_operations (loop_vec_i
               if (STMT_VINFO_RELEVANT_P (stmt_info))
                 {
                   tree phi_op;
-		  gimple *op_def_stmt;
 
                   if (gimple_phi_num_args (phi) != 1)
                     return false;
 
                   phi_op = PHI_ARG_DEF (phi, 0);
-                  if (TREE_CODE (phi_op) != SSA_NAME)
+		  stmt_vec_info op_def_info = loop_vinfo->lookup_def (phi_op);
+		  if (!op_def_info)
                     return false;
 
-                  op_def_stmt = SSA_NAME_DEF_STMT (phi_op);
-		  if (gimple_nop_p (op_def_stmt)
-		      || !flow_bb_inside_loop_p (loop, gimple_bb (op_def_stmt))
-		      || !vinfo_for_stmt (op_def_stmt))
-                    return false;
-
-                  if (STMT_VINFO_RELEVANT (vinfo_for_stmt (op_def_stmt))
-                        != vect_used_in_outer
-                      && STMT_VINFO_RELEVANT (vinfo_for_stmt (op_def_stmt))
-                           != vect_used_in_outer_by_reduction)
-                    return false;
+		  if (STMT_VINFO_RELEVANT (op_def_info) != vect_used_in_outer
+		      && (STMT_VINFO_RELEVANT (op_def_info)
+			  != vect_used_in_outer_by_reduction))
+		    return false;
                 }
 
               continue;
@@ -2504,20 +2497,19 @@ report_vect_op (dump_flags_t msg_type, g
   dump_gimple_stmt (msg_type, TDF_SLIM, stmt, 0);
 }
 
-/* DEF_STMT occurs in a loop that contains a potential reduction operation.
-   Return true if the results of DEF_STMT are something that can be
-   accumulated by such a reduction.  */
+/* DEF_STMT_INFO occurs in a loop that contains a potential reduction
+   operation.  Return true if the results of DEF_STMT_INFO are something
+   that can be accumulated by such a reduction.  */
 
 static bool
-vect_valid_reduction_input_p (gimple *def_stmt)
+vect_valid_reduction_input_p (stmt_vec_info def_stmt_info)
 {
-  stmt_vec_info def_stmt_info = vinfo_for_stmt (def_stmt);
-  return (is_gimple_assign (def_stmt)
-	  || is_gimple_call (def_stmt)
+  return (is_gimple_assign (def_stmt_info->stmt)
+	  || is_gimple_call (def_stmt_info->stmt)
 	  || STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_induction_def
-	  || (gimple_code (def_stmt) == GIMPLE_PHI
+	  || (gimple_code (def_stmt_info->stmt) == GIMPLE_PHI
 	      && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def
-	      && !is_loop_header_bb_p (gimple_bb (def_stmt))));
+	      && !is_loop_header_bb_p (gimple_bb (def_stmt_info->stmt))));
 }
 
 /* Detect SLP reduction of the form:
@@ -2633,18 +2625,14 @@ vect_is_slp_reduction (loop_vec_info loo
       if (gimple_assign_rhs2 (next_stmt) == lhs)
 	{
 	  tree op = gimple_assign_rhs1 (next_stmt);
-	  gimple *def_stmt = NULL;
-
-          if (TREE_CODE (op) == SSA_NAME)
-            def_stmt = SSA_NAME_DEF_STMT (op);
+	  stmt_vec_info def_stmt_info = loop_info->lookup_def (op);
 
 	  /* Check that the other def is either defined in the loop
 	     ("vect_internal_def"), or it's an induction (defined by a
 	     loop-header phi-node).  */
-          if (def_stmt
-	      && gimple_bb (def_stmt)
-	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
-	      && vect_valid_reduction_input_p (def_stmt))
+	  if (def_stmt_info
+	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt_info->stmt))
+	      && vect_valid_reduction_input_p (def_stmt_info))
 	    {
 	      lhs = gimple_assign_lhs (next_stmt);
 	      next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
@@ -2656,18 +2644,14 @@ vect_is_slp_reduction (loop_vec_info loo
       else
 	{
           tree op = gimple_assign_rhs2 (next_stmt);
-	  gimple *def_stmt = NULL;
-
-          if (TREE_CODE (op) == SSA_NAME)
-            def_stmt = SSA_NAME_DEF_STMT (op);
+	  stmt_vec_info def_stmt_info = loop_info->lookup_def (op);
 
           /* Check that the other def is either defined in the loop
             ("vect_internal_def"), or it's an induction (defined by a
             loop-header phi-node).  */
-          if (def_stmt
-	      && gimple_bb (def_stmt)
-	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
-	      && vect_valid_reduction_input_p (def_stmt))
+	  if (def_stmt_info
+	      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt_info->stmt))
+	      && vect_valid_reduction_input_p (def_stmt_info))
   	    {
 	      if (dump_enabled_p ())
 		{
@@ -2896,7 +2880,7 @@ vect_is_simple_reduction (loop_vec_info
 {
   struct loop *loop = (gimple_bb (phi))->loop_father;
   struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
-  gimple *def_stmt, *def1 = NULL, *def2 = NULL, *phi_use_stmt = NULL;
+  gimple *def_stmt, *phi_use_stmt = NULL;
   enum tree_code orig_code, code;
   tree op1, op2, op3 = NULL_TREE, op4 = NULL_TREE;
   tree type;
@@ -3020,7 +3004,7 @@ vect_is_simple_reduction (loop_vec_info
           return NULL;
         }
 
-      def1 = SSA_NAME_DEF_STMT (op1);
+      gimple *def1 = SSA_NAME_DEF_STMT (op1);
       if (gimple_bb (def1)
 	  && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
           && loop->inner
@@ -3178,14 +3162,9 @@ vect_is_simple_reduction (loop_vec_info
      1) integer arithmetic and no trapv
      2) floating point arithmetic, and special flags permit this optimization
      3) nested cycle (i.e., outer loop vectorization).  */
-  if (TREE_CODE (op1) == SSA_NAME)
-    def1 = SSA_NAME_DEF_STMT (op1);
-
-  if (TREE_CODE (op2) == SSA_NAME)
-    def2 = SSA_NAME_DEF_STMT (op2);
-
-  if (code != COND_EXPR
-      && ((!def1 || gimple_nop_p (def1)) && (!def2 || gimple_nop_p (def2))))
+  stmt_vec_info def1_info = loop_info->lookup_def (op1);
+  stmt_vec_info def2_info = loop_info->lookup_def (op2);
+  if (code != COND_EXPR && !def1_info && !def2_info)
     {
       if (dump_enabled_p ())
 	report_vect_op (MSG_NOTE, def_stmt, "reduction: no defs for operands: ");
@@ -3196,22 +3175,22 @@ vect_is_simple_reduction (loop_vec_info
      the other def is either defined in the loop ("vect_internal_def"),
      or it's an induction (defined by a loop-header phi-node).  */
 
-  if (def2 && def2 == phi
+  if (def2_info
+      && def2_info->stmt == phi
       && (code == COND_EXPR
-	  || !def1 || gimple_nop_p (def1)
-	  || !flow_bb_inside_loop_p (loop, gimple_bb (def1))
-	  || vect_valid_reduction_input_p (def1)))
+	  || !def1_info
+	  || vect_valid_reduction_input_p (def1_info)))
     {
       if (dump_enabled_p ())
 	report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
       return def_stmt;
     }
 
-  if (def1 && def1 == phi
+  if (def1_info
+      && def1_info->stmt == phi
       && (code == COND_EXPR
-	  || !def2 || gimple_nop_p (def2)
-	  || !flow_bb_inside_loop_p (loop, gimple_bb (def2))
-	  || vect_valid_reduction_input_p (def2)))
+	  || !def2_info
+	  || vect_valid_reduction_input_p (def2_info)))
     {
       if (! nested_in_vect_loop && orig_code != MINUS_EXPR)
 	{
@@ -6131,9 +6110,8 @@ vectorizable_reduction (gimple *stmt, gi
   bool nested_cycle = false, found_nested_cycle_def = false;
   bool double_reduc = false;
   basic_block def_bb;
-  struct loop * def_stmt_loop, *outer_loop = NULL;
+  struct loop * def_stmt_loop;
   tree def_arg;
-  gimple *def_arg_stmt;
   auto_vec<tree> vec_oprnds0;
   auto_vec<tree> vec_oprnds1;
   auto_vec<tree> vec_oprnds2;
@@ -6151,7 +6129,6 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (nested_in_vect_loop_p (loop, stmt))
     {
-      outer_loop = loop;
       loop = loop->inner;
       nested_cycle = true;
     }
@@ -6731,13 +6708,10 @@ vectorizable_reduction (gimple *stmt, gi
       def_stmt_loop = def_bb->loop_father;
       def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_stmt,
                                        loop_preheader_edge (def_stmt_loop));
-      if (TREE_CODE (def_arg) == SSA_NAME
-          && (def_arg_stmt = SSA_NAME_DEF_STMT (def_arg))
-          && gimple_code (def_arg_stmt) == GIMPLE_PHI
-          && flow_bb_inside_loop_p (outer_loop, gimple_bb (def_arg_stmt))
-          && vinfo_for_stmt (def_arg_stmt)
-          && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_arg_stmt))
-              == vect_double_reduction_def)
+      stmt_vec_info def_arg_stmt_info = loop_vinfo->lookup_def (def_arg);
+      if (def_arg_stmt_info
+	  && (STMT_VINFO_DEF_TYPE (def_arg_stmt_info)
+	      == vect_double_reduction_def))
         double_reduc = true;
     }
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:23.793367723 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:27.281336751 +0100
@@ -2389,11 +2389,8 @@ vect_detect_hybrid_slp_1 (tree *tp, int
   if (wi->is_lhs)
     return NULL_TREE;
 
-  stmt_vec_info def_stmt_info;
-  if (TREE_CODE (*tp) == SSA_NAME
-      && !SSA_NAME_IS_DEFAULT_DEF (*tp)
-      && (def_stmt_info = loop_vinfo->lookup_stmt (SSA_NAME_DEF_STMT (*tp)))
-      && PURE_SLP_STMT (def_stmt_info))
+  stmt_vec_info def_stmt_info = loop_vinfo->lookup_def (*tp);
+  if (def_stmt_info && PURE_SLP_STMT (def_stmt_info))
     {
       if (dump_enabled_p ())
 	{


* [09/46] Add vec_info::lookup_single_use
From: Richard Sandiford @ 2018-07-24  9:56 UTC
  To: gcc-patches

This patch adds a helper function for seeing whether there is a single
user of an SSA name, and whether that user has a stmt_vec_info.
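
As a usage sketch (a hypothetical helper, not taken from the patch),
the new function folds the single_imm_use call and the stmt_vec_info
lookup into one step:

  static bool
  single_reduction_use_p (vec_info *vinfo, tree lhs)
  {
    /* Nonnull only if LHS has a single non-debug user and that user
       has a stmt_vec_info in VINFO.  */
    stmt_vec_info use_stmt_info = vinfo->lookup_single_use (lhs);
    return (use_stmt_info
            && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def);
  }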


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::lookup_single_use): Declare.
	* tree-vectorizer.c (vec_info::lookup_single_use): New function.
	* tree-vect-loop.c (vectorizable_reduction): Use it instead of
	a single_imm_use-based sequence.
	* tree-vect-stmts.c (supportable_widening_operation): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:27.285336715 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:30.401309046 +0100
@@ -220,6 +220,7 @@ struct vec_info {
   stmt_vec_info add_stmt (gimple *);
   stmt_vec_info lookup_stmt (gimple *);
   stmt_vec_info lookup_def (tree);
+  stmt_vec_info lookup_single_use (tree);
 
   /* The type of vectorization.  */
   vec_kind kind;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:22:27.285336715 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:22:30.401309046 +0100
@@ -548,6 +548,20 @@ vec_info::lookup_def (tree name)
   return NULL;
 }
 
+/* See whether there is a single non-debug statement that uses LHS and
+   whether that statement has an associated stmt_vec_info.  Return the
+   stmt_vec_info if so, otherwise return null.  */
+
+stmt_vec_info
+vec_info::lookup_single_use (tree lhs)
+{
+  use_operand_p dummy;
+  gimple *use_stmt;
+  if (single_imm_use (lhs, &dummy, &use_stmt))
+    return lookup_stmt (use_stmt);
+  return NULL;
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:27.277336786 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:30.401309046 +0100
@@ -6138,6 +6138,7 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (gimple_code (stmt) == GIMPLE_PHI)
     {
+      tree phi_result = gimple_phi_result (stmt);
       /* Analysis is fully done on the reduction stmt invocation.  */
       if (! vec_stmt)
 	{
@@ -6158,7 +6159,8 @@ vectorizable_reduction (gimple *stmt, gi
       if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (reduc_stmt)))
 	reduc_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (reduc_stmt));
 
-      if (STMT_VINFO_VEC_REDUCTION_TYPE (vinfo_for_stmt (reduc_stmt))
+      stmt_vec_info reduc_stmt_info = vinfo_for_stmt (reduc_stmt);
+      if (STMT_VINFO_VEC_REDUCTION_TYPE (reduc_stmt_info)
 	  == EXTRACT_LAST_REDUCTION)
 	/* Leave the scalar phi in place.  */
 	return true;
@@ -6185,15 +6187,12 @@ vectorizable_reduction (gimple *stmt, gi
       else
 	ncopies = vect_get_num_copies (loop_vinfo, vectype_in);
 
-      use_operand_p use_p;
-      gimple *use_stmt;
+      stmt_vec_info use_stmt_info;
       if (ncopies > 1
-	  && (STMT_VINFO_RELEVANT (vinfo_for_stmt (reduc_stmt))
-	      <= vect_used_only_live)
-	  && single_imm_use (gimple_phi_result (stmt), &use_p, &use_stmt)
-	  && (use_stmt == reduc_stmt
-	      || (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use_stmt))
-		  == reduc_stmt)))
+	  && STMT_VINFO_RELEVANT (reduc_stmt_info) <= vect_used_only_live
+	  && (use_stmt_info = loop_vinfo->lookup_single_use (phi_result))
+	  && (use_stmt_info == reduc_stmt_info
+	      || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt))
 	single_defuse_cycle = true;
 
       /* Create the destination vector  */
@@ -6955,13 +6954,13 @@ vectorizable_reduction (gimple *stmt, gi
    This only works when we see both the reduction PHI and its only consumer
    in vectorizable_reduction and there are no intermediate stmts
    participating.  */
-  use_operand_p use_p;
-  gimple *use_stmt;
+  stmt_vec_info use_stmt_info;
+  tree reduc_phi_result = gimple_phi_result (reduc_def_stmt);
   if (ncopies > 1
       && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
-      && single_imm_use (gimple_phi_result (reduc_def_stmt), &use_p, &use_stmt)
-      && (use_stmt == stmt
-	  || STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use_stmt)) == stmt))
+      && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
+      && (use_stmt_info == stmt_info
+	  || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt))
     {
       single_defuse_cycle = true;
       epilog_copies = 1;
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:27.281336751 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:30.401309046 +0100
@@ -10310,14 +10310,11 @@ supportable_widening_operation (enum tre
              same operation.  One such an example is s += a * b, where elements
              in a and b cannot be reordered.  Here we check if the vector defined
              by STMT is only directly used in the reduction statement.  */
-          tree lhs = gimple_assign_lhs (stmt);
-          use_operand_p dummy;
-          gimple *use_stmt;
-          stmt_vec_info use_stmt_info = NULL;
-          if (single_imm_use (lhs, &dummy, &use_stmt)
-              && (use_stmt_info = vinfo_for_stmt (use_stmt))
-              && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def)
-            return true;
+	  tree lhs = gimple_assign_lhs (stmt);
+	  stmt_vec_info use_stmt_info = loop_info->lookup_single_use (lhs);
+	  if (use_stmt_info
+	      && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def)
+	    return true;
         }
       c1 = VEC_WIDEN_MULT_LO_EXPR;
       c2 = VEC_WIDEN_MULT_HI_EXPR;


* [11/46] Pass back a stmt_vec_info from vect_is_simple_use
From: Richard Sandiford @ 2018-07-24  9:57 UTC
  To: gcc-patches

This patch makes vect_is_simple_use pass back a stmt_vec_info to
those callers that want it.  Most users only need the stmt_vec_info
but some need the gimple stmt too.

It's probably high time we added a class to represent "simple operands"
instead, but I have a separate series that tries to clean up how
operands are handled (with a view to allowing mixed vector sizes).
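
A sketch of the new calling convention (illustrative only; the
surrounding function is invented):

  static bool
  internal_def_p (tree op, vec_info *vinfo)
  {
    enum vect_def_type dt;
    stmt_vec_info def_stmt_info;
    gimple *def_stmt;
    if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt_info, &def_stmt))
      return false;
    /* DEF_STMT_INFO is nonnull only if OP is defined in the
       vectorizable region; DEF_STMT is set for any SSA_NAME operand,
       wherever its definition lives.  */
    return def_stmt_info && dt == vect_internal_def;
  }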


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_is_simple_use): Add an optional
	stmt_vec_info * parameter before the optional gimple **.
	* tree-vect-stmts.c (vect_is_simple_use): Likewise.
	(process_use, vect_get_vec_def_for_operand_1): Update callers.
	(vect_get_vec_def_for_operand, vectorizable_shift): Likewise.
	* tree-vect-loop.c (vectorizable_reduction): Likewise.
	(vectorizable_live_operation): Likewise.
	* tree-vect-patterns.c (type_conversion_p): Likewise.
	(vect_look_through_possible_promotion): Likewise.
	(vect_recog_rotate_pattern): Likewise.
	* tree-vect-slp.c (vect_get_and_check_slp_defs): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:33.829278607 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:37.257248166 +0100
@@ -1532,9 +1532,10 @@ extern tree get_mask_type_for_scalar_typ
 extern tree get_same_sized_vectype (tree, tree);
 extern bool vect_get_loop_mask_type (loop_vec_info);
 extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
-				gimple ** = NULL);
+				stmt_vec_info * = NULL, gimple ** = NULL);
 extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
-				tree *, gimple ** = NULL);
+				tree *, stmt_vec_info * = NULL,
+				gimple ** = NULL);
 extern bool supportable_widening_operation (enum tree_code, gimple *, tree,
 					    tree, enum tree_code *,
 					    enum tree_code *, int *,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:33.829278607 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:37.257248166 +0100
@@ -459,11 +459,9 @@ process_use (gimple *stmt, tree use, loo
 	     enum vect_relevant relevant, vec<gimple *> *worklist,
 	     bool force)
 {
-  struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   stmt_vec_info dstmt_vinfo;
   basic_block bb, def_bb;
-  gimple *def_stmt;
   enum vect_def_type dt;
 
   /* case 1: we are only interested in uses that need to be vectorized.  Uses
@@ -471,7 +469,7 @@ process_use (gimple *stmt, tree use, loo
   if (!force && !exist_non_indexing_operands_for_use_p (use, stmt))
      return true;
 
-  if (!vect_is_simple_use (use, loop_vinfo, &dt, &def_stmt))
+  if (!vect_is_simple_use (use, loop_vinfo, &dt, &dstmt_vinfo))
     {
       if (dump_enabled_p ())
         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -479,27 +477,20 @@ process_use (gimple *stmt, tree use, loo
       return false;
     }
 
-  if (!def_stmt || gimple_nop_p (def_stmt))
+  if (!dstmt_vinfo)
     return true;
 
-  def_bb = gimple_bb (def_stmt);
-  if (!flow_bb_inside_loop_p (loop, def_bb))
-    {
-      if (dump_enabled_p ())
-	dump_printf_loc (MSG_NOTE, vect_location, "def_stmt is out of loop.\n");
-      return true;
-    }
+  def_bb = gimple_bb (dstmt_vinfo->stmt);
 
-  /* case 2: A reduction phi (STMT) defined by a reduction stmt (DEF_STMT).
-     DEF_STMT must have already been processed, because this should be the
+  /* case 2: A reduction phi (STMT) defined by a reduction stmt (DSTMT_VINFO).
+     DSTMT_VINFO must have already been processed, because this should be the
      only way that STMT, which is a reduction-phi, was put in the worklist,
-     as there should be no other uses for DEF_STMT in the loop.  So we just
+     as there should be no other uses for DSTMT_VINFO in the loop.  So we just
      check that everything is as expected, and we are done.  */
-  dstmt_vinfo = vinfo_for_stmt (def_stmt);
   bb = gimple_bb (stmt);
   if (gimple_code (stmt) == GIMPLE_PHI
       && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
-      && gimple_code (def_stmt) != GIMPLE_PHI
+      && gimple_code (dstmt_vinfo->stmt) != GIMPLE_PHI
       && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def
       && bb->loop_father == def_bb->loop_father)
     {
@@ -514,7 +505,7 @@ process_use (gimple *stmt, tree use, loo
 
   /* case 3a: outer-loop stmt defining an inner-loop stmt:
 	outer-loop-header-bb:
-		d = def_stmt
+		d = dstmt_vinfo
 	inner-loop:
 		stmt # use (d)
 	outer-loop-tail-bb:
@@ -554,7 +545,7 @@ process_use (gimple *stmt, tree use, loo
 	outer-loop-header-bb:
 		...
 	inner-loop:
-		d = def_stmt
+		d = dstmt_vinfo
 	outer-loop-tail-bb (or outer-loop-exit-bb in double reduction):
 		stmt # use (d)		*/
   else if (flow_loop_nested_p (bb->loop_father, def_bb->loop_father))
@@ -601,7 +592,7 @@ process_use (gimple *stmt, tree use, loo
     }
 
 
-  vect_mark_relevant (worklist, def_stmt, relevant, false);
+  vect_mark_relevant (worklist, dstmt_vinfo, relevant, false);
   return true;
 }
 
@@ -1563,7 +1554,9 @@ vect_get_vec_def_for_operand (tree op, g
       dump_printf (MSG_NOTE, "\n");
     }
 
-  is_simple_use = vect_is_simple_use (op, loop_vinfo, &dt, &def_stmt);
+  stmt_vec_info def_stmt_info;
+  is_simple_use = vect_is_simple_use (op, loop_vinfo, &dt,
+				      &def_stmt_info, &def_stmt);
   gcc_assert (is_simple_use);
   if (def_stmt && dump_enabled_p ())
     {
@@ -1588,7 +1581,7 @@ vect_get_vec_def_for_operand (tree op, g
       return vect_init_vector (stmt, op, vector_type, NULL);
     }
   else
-    return vect_get_vec_def_for_operand_1 (def_stmt, dt);
+    return vect_get_vec_def_for_operand_1 (def_stmt_info, dt);
 }
 
 
@@ -5479,7 +5472,9 @@ vectorizable_shift (gimple *stmt, gimple
     return false;
 
   op1 = gimple_assign_rhs2 (stmt);
-  if (!vect_is_simple_use (op1, vinfo, &dt[1], &op1_vectype))
+  stmt_vec_info op1_def_stmt_info;
+  if (!vect_is_simple_use (op1, vinfo, &dt[1], &op1_vectype,
+			   &op1_def_stmt_info))
     {
       if (dump_enabled_p ())
         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -5524,12 +5519,8 @@ vectorizable_shift (gimple *stmt, gimple
       /* If the shift amount is computed by a pattern stmt we cannot
          use the scalar amount directly thus give up and use a vector
 	 shift.  */
-      if (dt[1] == vect_internal_def)
-	{
-	  gimple *def = SSA_NAME_DEF_STMT (op1);
-	  if (is_pattern_stmt_p (vinfo_for_stmt (def)))
-	    scalar_shift_arg = false;
-	}
+      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
+	scalar_shift_arg = false;
     }
   else
     {
@@ -10051,7 +10042,10 @@ get_same_sized_vectype (tree scalar_type
    VINFO - the vect info of the loop or basic block that is being vectorized.
    OPERAND - operand in the loop or bb.
    Output:
-   DEF_STMT_OUT (optional) - the defining stmt in case OPERAND is an SSA_NAME.
+   DEF_STMT_INFO_OUT (optional) - information about the defining stmt in
+     case OPERAND is an SSA_NAME that is defined in the vectorizable region
+   DEF_STMT_OUT (optional) - the defining stmt in case OPERAND is an SSA_NAME;
+     the definition could be anywhere in the function
    DT - the type of definition
 
    Returns whether a stmt with OPERAND can be vectorized.
@@ -10064,8 +10058,10 @@ get_same_sized_vectype (tree scalar_type
 
 bool
 vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
-		    gimple **def_stmt_out)
+		    stmt_vec_info *def_stmt_info_out, gimple **def_stmt_out)
 {
+  if (def_stmt_info_out)
+    *def_stmt_info_out = NULL;
   if (def_stmt_out)
     *def_stmt_out = NULL;
   *dt = vect_unknown_def_type;
@@ -10113,6 +10109,8 @@ vect_is_simple_use (tree operand, vec_in
 	      *dt = vect_unknown_def_type;
 	      break;
 	    }
+	  if (def_stmt_info_out)
+	    *def_stmt_info_out = stmt_vinfo;
 	}
       if (def_stmt_out)
 	*def_stmt_out = def_stmt;
@@ -10175,14 +10173,18 @@ vect_is_simple_use (tree operand, vec_in
 
 bool
 vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
-		    tree *vectype, gimple **def_stmt_out)
+		    tree *vectype, stmt_vec_info *def_stmt_info_out,
+		    gimple **def_stmt_out)
 {
+  stmt_vec_info def_stmt_info;
   gimple *def_stmt;
-  if (!vect_is_simple_use (operand, vinfo, dt, &def_stmt))
+  if (!vect_is_simple_use (operand, vinfo, dt, &def_stmt_info, &def_stmt))
     return false;
 
   if (def_stmt_out)
     *def_stmt_out = def_stmt;
+  if (def_stmt_info_out)
+    *def_stmt_info_out = def_stmt_info;
 
   /* Now get a vector type if the def is internal, otherwise supply
      NULL_TREE and leave it up to the caller to figure out a proper
@@ -10193,8 +10195,7 @@ vect_is_simple_use (tree operand, vec_in
       || *dt == vect_double_reduction_def
       || *dt == vect_nested_cycle)
     {
-      stmt_vec_info stmt_info = vinfo_for_stmt (def_stmt);
-      *vectype = STMT_VINFO_VECTYPE (stmt_info);
+      *vectype = STMT_VINFO_VECTYPE (def_stmt_info);
       gcc_assert (*vectype != NULL_TREE);
       if (dump_enabled_p ())
 	{
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:33.821278677 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:37.253248202 +0100
@@ -6090,7 +6090,6 @@ vectorizable_reduction (gimple *stmt, gi
   int op_type;
   optab optab;
   tree new_temp = NULL_TREE;
-  gimple *def_stmt;
   enum vect_def_type dt, cond_reduc_dt = vect_unknown_def_type;
   gimple *cond_reduc_def_stmt = NULL;
   enum tree_code cond_reduc_op_code = ERROR_MARK;
@@ -6324,13 +6323,14 @@ vectorizable_reduction (gimple *stmt, gi
       if (i == 0 && code == COND_EXPR)
         continue;
 
-      is_simple_use = vect_is_simple_use (ops[i], loop_vinfo,
-					  &dts[i], &tem, &def_stmt);
+      stmt_vec_info def_stmt_info;
+      is_simple_use = vect_is_simple_use (ops[i], loop_vinfo, &dts[i], &tem,
+					  &def_stmt_info);
       dt = dts[i];
       gcc_assert (is_simple_use);
       if (dt == vect_reduction_def)
 	{
-          reduc_def_stmt = def_stmt;
+	  reduc_def_stmt = def_stmt_info;
 	  reduc_index = i;
 	  continue;
 	}
@@ -6352,11 +6352,11 @@ vectorizable_reduction (gimple *stmt, gi
 	return false;
 
       if (dt == vect_nested_cycle)
-        {
-          found_nested_cycle_def = true;
-          reduc_def_stmt = def_stmt;
-          reduc_index = i;
-        }
+	{
+	  found_nested_cycle_def = true;
+	  reduc_def_stmt = def_stmt_info;
+	  reduc_index = i;
+	}
 
       if (i == 1 && code == COND_EXPR)
 	{
@@ -6367,11 +6367,11 @@ vectorizable_reduction (gimple *stmt, gi
 	      cond_reduc_val = ops[i];
 	    }
 	  if (dt == vect_induction_def
-	      && def_stmt != NULL
-	      && is_nonwrapping_integer_induction (def_stmt, loop))
+	      && def_stmt_info
+	      && is_nonwrapping_integer_induction (def_stmt_info, loop))
 	    {
 	      cond_reduc_dt = dt;
-	      cond_reduc_def_stmt = def_stmt;
+	      cond_reduc_def_stmt = def_stmt_info;
 	    }
 	}
     }
@@ -7958,7 +7958,7 @@ vectorizable_live_operation (gimple *stm
   else
     {
       enum vect_def_type dt = STMT_VINFO_DEF_TYPE (stmt_info);
-      vec_lhs = vect_get_vec_def_for_operand_1 (stmt, dt);
+      vec_lhs = vect_get_vec_def_for_operand_1 (stmt_info, dt);
       gcc_checking_assert (ncopies == 1
 			   || !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
 
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:33.825278642 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:37.253248202 +0100
@@ -250,7 +250,9 @@ type_conversion_p (tree name, gimple *us
   enum vect_def_type dt;
 
   stmt_vinfo = vinfo_for_stmt (use_stmt);
-  if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, def_stmt))
+  stmt_vec_info def_stmt_info;
+  if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, &def_stmt_info,
+			   def_stmt))
     return false;
 
   if (dt != vect_internal_def
@@ -371,9 +373,10 @@ vect_look_through_possible_promotion (ve
   while (TREE_CODE (op) == SSA_NAME && INTEGRAL_TYPE_P (op_type))
     {
       /* See whether OP is simple enough to vectorize.  */
+      stmt_vec_info def_stmt_info;
       gimple *def_stmt;
       vect_def_type dt;
-      if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt))
+      if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt_info, &def_stmt))
 	break;
 
       /* If OP is the input of a demotion, skip over it to see whether
@@ -407,17 +410,15 @@ vect_look_through_possible_promotion (ve
 	 the cast is potentially vectorizable.  */
       if (!def_stmt)
 	break;
-      if (dt == vect_internal_def)
-	{
-	  caster = vinfo_for_stmt (def_stmt);
-	  /* Ignore pattern statements, since we don't link uses for them.  */
-	  if (single_use_p
-	      && !STMT_VINFO_RELATED_STMT (caster)
-	      && !has_single_use (res))
-	    *single_use_p = false;
-	}
-      else
-	caster = NULL;
+      caster = def_stmt_info;
+
+      /* Ignore pattern statements, since we don't link uses for them.  */
+      if (caster
+	  && single_use_p
+	  && !STMT_VINFO_RELATED_STMT (caster)
+	  && !has_single_use (res))
+	*single_use_p = false;
+
       gassign *assign = dyn_cast <gassign *> (def_stmt);
       if (!assign || !CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def_stmt)))
 	break;
@@ -1988,7 +1989,8 @@ vect_recog_rotate_pattern (stmt_vec_info
       || !TYPE_UNSIGNED (type))
     return NULL;
 
-  if (!vect_is_simple_use (oprnd1, vinfo, &dt, &def_stmt))
+  stmt_vec_info def_stmt_info;
+  if (!vect_is_simple_use (oprnd1, vinfo, &dt, &def_stmt_info, &def_stmt))
     return NULL;
 
   if (dt != vect_internal_def
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:33.825278642 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:37.253248202 +0100
@@ -303,7 +303,6 @@ vect_get_and_check_slp_defs (vec_info *v
   gimple *stmt = stmts[stmt_num];
   tree oprnd;
   unsigned int i, number_of_oprnds;
-  gimple *def_stmt;
   enum vect_def_type dt = vect_uninitialized_def;
   bool pattern = false;
   slp_oprnd_info oprnd_info;
@@ -357,7 +356,8 @@ vect_get_and_check_slp_defs (vec_info *v
 
       oprnd_info = (*oprnds_info)[i];
 
-      if (!vect_is_simple_use (oprnd, vinfo, &dt, &def_stmt))
+      stmt_vec_info def_stmt_info;
+      if (!vect_is_simple_use (oprnd, vinfo, &dt, &def_stmt_info))
 	{
 	  if (dump_enabled_p ())
 	    {
@@ -370,13 +370,10 @@ vect_get_and_check_slp_defs (vec_info *v
 	  return -1;
 	}
 
-      /* Check if DEF_STMT is a part of a pattern in LOOP and get the def stmt
-         from the pattern.  Check that all the stmts of the node are in the
-         pattern.  */
-      if (def_stmt && gimple_bb (def_stmt)
-	  && vect_stmt_in_region_p (vinfo, def_stmt)
-	  && vinfo_for_stmt (def_stmt)
-	  && is_pattern_stmt_p (vinfo_for_stmt (def_stmt)))
+      /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
+	 the def stmt from the pattern.  Check that all the stmts of the
+	 node are in the pattern.  */
+      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
         {
           pattern = true;
           if (!first && !oprnd_info->first_pattern
@@ -405,7 +402,7 @@ vect_get_and_check_slp_defs (vec_info *v
 	      return 1;
             }
 
-          dt = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt));
+	  dt = STMT_VINFO_DEF_TYPE (def_stmt_info);
 
           if (dt == vect_unknown_def_type)
             {
@@ -415,7 +412,7 @@ vect_get_and_check_slp_defs (vec_info *v
               return -1;
             }
 
-          switch (gimple_code (def_stmt))
+	  switch (gimple_code (def_stmt_info->stmt))
             {
             case GIMPLE_PHI:
             case GIMPLE_ASSIGN:
@@ -499,7 +496,7 @@ vect_get_and_check_slp_defs (vec_info *v
 	case vect_reduction_def:
 	case vect_induction_def:
 	case vect_internal_def:
-	  oprnd_info->def_stmts.quick_push (def_stmt);
+	  oprnd_info->def_stmts.quick_push (def_stmt_info);
 	  break;
 
 	default:


* [10/46] Temporarily make stmt_vec_info a class
From: Richard Sandiford @ 2018-07-24  9:57 UTC
  To: gcc-patches

This patch turns stmt_vec_info into an unspeakably bad wrapper class
and adds an implicit conversion to the associated gimple stmt.
Having this conversion makes the rest of the series easier to write,
but since the class goes away again at the end of the series, I've
not bothered adding any comments or tried to make it pretty.
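
The practical effect is that a stmt_vec_info can be tested like a
pointer and passed wherever a gimple statement is expected.  In
sketch form (illustrative only, not taken from the patch):

  static basic_block
  stmt_info_block (vec_info *vinfo, gimple *stmt)
  {
    stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
    /* operator bool tests the wrapped pointer; the implicit
       conversion to gimple * lets gimple_bb accept the wrapper
       directly.  */
    return stmt_info ? gimple_bb (stmt_info) : NULL;
  }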


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (stmt_vec_info): Temporarily change from
	a typedef to a wrapper class.
	(NULL_STMT_VEC_INFO): New macro.
	(vec_info::stmt_infos): Change to vec<stmt_vec_info>.
	(stmt_vec_info::operator*): New function.
	(stmt_vec_info::operator gimple *): Likewise.
	(set_vinfo_for_stmt): Use NULL_STMT_VEC_INFO.
	(add_stmt_costs): Likewise.
	* tree-vect-loop-manip.c (iv_phi_p): Likewise.
	* tree-vect-loop.c (vect_compute_single_scalar_iteration_cost)
	(vect_get_known_peeling_cost): Likewise.
	(vect_estimate_min_profitable_iters): Likewise.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
	* tree-vect-slp.c (vect_remove_slp_scalar_calls): Likewise.
	* tree-vect-stmts.c (vect_build_gather_load_calls): Likewise.
	(vectorizable_store, free_stmt_vec_infos): Likewise.
	(new_stmt_vec_info): Change return type of xcalloc to
	_stmt_vec_info *.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:30.401309046 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:33.829278607 +0100
@@ -21,12 +21,31 @@ Software Foundation; either version 3, o
 #ifndef GCC_TREE_VECTORIZER_H
 #define GCC_TREE_VECTORIZER_H
 
+class stmt_vec_info {
+public:
+  stmt_vec_info () {}
+  stmt_vec_info (struct _stmt_vec_info *ptr) : m_ptr (ptr) {}
+  struct _stmt_vec_info *operator-> () const { return m_ptr; }
+  struct _stmt_vec_info &operator* () const;
+  operator struct _stmt_vec_info * () const { return m_ptr; }
+  operator gimple * () const;
+  operator void * () const { return m_ptr; }
+  operator bool () const { return m_ptr; }
+  bool operator == (const stmt_vec_info &x) { return x.m_ptr == m_ptr; }
+  bool operator == (_stmt_vec_info *x) { return x == m_ptr; }
+  bool operator != (const stmt_vec_info &x) { return x.m_ptr != m_ptr; }
+  bool operator != (_stmt_vec_info *x) { return x != m_ptr; }
+
+private:
+  struct _stmt_vec_info *m_ptr;
+};
+
+#define NULL_STMT_VEC_INFO (stmt_vec_info (NULL))
+
 #include "tree-data-ref.h"
 #include "tree-hash-traits.h"
 #include "target.h"
 
-typedef struct _stmt_vec_info *stmt_vec_info;
-
 /* Used for naming of new temporaries.  */
 enum vect_var_kind {
   vect_simple_var,
@@ -229,7 +248,7 @@ struct vec_info {
   vec_info_shared *shared;
 
   /* The mapping of GIMPLE UID to stmt_vec_info.  */
-  vec<struct _stmt_vec_info *> stmt_vec_infos;
+  vec<stmt_vec_info> stmt_vec_infos;
 
   /* All SLP instances.  */
   auto_vec<slp_instance> slp_instances;
@@ -1052,6 +1071,17 @@ #define VECT_SCALAR_BOOLEAN_TYPE_P(TYPE)
        && TYPE_PRECISION (TYPE) == 1		\
        && TYPE_UNSIGNED (TYPE)))
 
+inline _stmt_vec_info &
+stmt_vec_info::operator* () const
+{
+  return *m_ptr;
+}
+
+inline stmt_vec_info::operator gimple * () const
+{
+  return m_ptr ? m_ptr->stmt : NULL;
+}
+
 extern vec<stmt_vec_info> *stmt_vec_info_vec;
 
 void set_stmt_vec_info_vec (vec<stmt_vec_info> *);
@@ -1084,7 +1114,7 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
     }
   else
     {
-      gcc_checking_assert (info == NULL);
+      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
       (*stmt_vec_info_vec)[uid - 1] = info;
     }
 }
@@ -1261,7 +1291,9 @@ add_stmt_costs (void *data, stmt_vector_
   unsigned i;
   FOR_EACH_VEC_ELT (*cost_vec, i, cost)
     add_stmt_cost (data, cost->count, cost->kind,
-		   cost->stmt ? vinfo_for_stmt (cost->stmt) : NULL,
+		   (cost->stmt
+		    ? vinfo_for_stmt (cost->stmt)
+		    : NULL_STMT_VEC_INFO),
 		   cost->misalign, cost->where);
 }
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-06-30 14:56:22.022893750 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:22:33.821278677 +0100
@@ -1344,7 +1344,7 @@ iv_phi_p (gphi *phi)
     return false;
 
   stmt_vec_info stmt_info = vinfo_for_stmt (phi);
-  gcc_assert (stmt_info != NULL);
+  gcc_assert (stmt_info != NULL_STMT_VEC_INFO);
   if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
       || STMT_VINFO_DEF_TYPE (stmt_info) == vect_double_reduction_def)
     return false;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:30.401309046 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:33.821278677 +0100
@@ -1139,7 +1139,7 @@ vect_compute_single_scalar_iteration_cos
 		    j, si)
     {
       struct _stmt_vec_info *stmt_info
-	= si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	= si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
       (void) add_stmt_cost (target_cost_data, si->count,
 			    si->kind, stmt_info, si->misalign,
 			    vect_body);
@@ -3351,7 +3351,7 @@ vect_get_known_peeling_cost (loop_vec_in
     FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
 	{
 	  stmt_vec_info stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  retval += record_stmt_cost (prologue_cost_vec,
 				      si->count * peel_iters_prologue,
 				      si->kind, stmt_info, si->misalign,
@@ -3361,7 +3361,7 @@ vect_get_known_peeling_cost (loop_vec_in
     FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
 	{
 	  stmt_vec_info stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  retval += record_stmt_cost (epilogue_cost_vec,
 				      si->count * *peel_iters_epilogue,
 				      si->kind, stmt_info, si->misalign,
@@ -3504,7 +3504,7 @@ vect_estimate_min_profitable_iters (loop
 			    j, si)
 	    {
 	      struct _stmt_vec_info *stmt_info
-		= si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+		= si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	      (void) add_stmt_cost (target_cost_data, si->count,
 				    si->kind, stmt_info, si->misalign,
 				    vect_epilogue);
@@ -3541,7 +3541,7 @@ vect_estimate_min_profitable_iters (loop
       FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo), j, si)
 	{
 	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  (void) add_stmt_cost (target_cost_data,
 				si->count * peel_iters_prologue,
 				si->kind, stmt_info, si->misalign,
@@ -3573,7 +3573,7 @@ vect_estimate_min_profitable_iters (loop
       FOR_EACH_VEC_ELT (prologue_cost_vec, j, si)
 	{
 	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
 				si->misalign, vect_prologue);
 	}
@@ -3581,7 +3581,7 @@ vect_estimate_min_profitable_iters (loop
       FOR_EACH_VEC_ELT (epilogue_cost_vec, j, si)
 	{
 	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
+	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
 				si->misalign, vect_epilogue);
 	}
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:27.281336751 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:33.825278642 +0100
@@ -103,7 +103,7 @@ vect_init_pattern_stmt (gimple *pattern_
 {
   vec_info *vinfo = orig_stmt_info->vinfo;
   stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
-  if (pattern_stmt_info == NULL)
+  if (pattern_stmt_info == NULL_STMT_VEC_INFO)
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:27.281336751 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:33.825278642 +0100
@@ -4039,7 +4039,7 @@ vect_remove_slp_scalar_calls (slp_tree n
       if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
 	continue;
       stmt_info = vinfo_for_stmt (stmt);
-      if (stmt_info == NULL
+      if (stmt_info == NULL_STMT_VEC_INFO
 	  || is_pattern_stmt_p (stmt_info)
 	  || !PURE_SLP_STMT (stmt_info))
 	continue;
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:30.401309046 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:33.829278607 +0100
@@ -2865,7 +2865,7 @@ vect_build_gather_load_calls (gimple *st
 	  new_stmt = SSA_NAME_DEF_STMT (var);
 	}
 
-      if (prev_stmt_info == NULL)
+      if (prev_stmt_info == NULL_STMT_VEC_INFO)
 	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
       else
 	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
@@ -6550,7 +6550,7 @@ vectorizable_store (gimple *stmt, gimple
 
 	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
-	  if (prev_stmt_info == NULL)
+	  if (prev_stmt_info == NULL_STMT_VEC_INFO)
 	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
 	  else
 	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
@@ -9805,7 +9805,7 @@ vect_remove_stores (gimple *first_stmt)
 new_stmt_vec_info (gimple *stmt, vec_info *vinfo)
 {
   stmt_vec_info res;
-  res = (stmt_vec_info) xcalloc (1, sizeof (struct _stmt_vec_info));
+  res = (_stmt_vec_info *) xcalloc (1, sizeof (struct _stmt_vec_info));
 
   STMT_VINFO_TYPE (res) = undef_vec_info_type;
   STMT_VINFO_STMT (res) = stmt;
@@ -9862,7 +9862,7 @@ free_stmt_vec_infos (vec<stmt_vec_info>
   unsigned int i;
   stmt_vec_info info;
   FOR_EACH_VEC_ELT (*v, i, info)
-    if (info != NULL)
+    if (info != NULL_STMT_VEC_INFO)
       free_stmt_vec_info (STMT_VINFO_STMT (info));
   if (v == stmt_vec_info_vec)
     stmt_vec_info_vec = NULL;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [13/46] Make STMT_VINFO_RELATED_STMT a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (10 preceding siblings ...)
  2018-07-24  9:57 ` [11/46] Pass back a stmt_vec_info from vect_is_simple_use Richard Sandiford
@ 2018-07-24  9:58 ` Richard Sandiford
  2018-07-25  9:19   ` Richard Biener
  2018-07-24  9:58 ` [12/46] Make vect_finish_stmt_generation return " Richard Sandiford
                   ` (33 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:58 UTC (permalink / raw)
  To: gcc-patches

This patch changes STMT_VINFO_RELATED_STMT from a gimple stmt to a
stmt_vec_info.
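
As a simplified illustration (the local variable names here are mine,
not from the patch), uses that previously needed an extra lookup:

  gimple *related = STMT_VINFO_RELATED_STMT (stmt_info);
  stmt_vec_info related_info = vinfo_for_stmt (related);

become a direct access, with the gimple stmt still reachable via ->stmt:

  stmt_vec_info related_info = STMT_VINFO_RELATED_STMT (stmt_info);
  gimple *related = related_info->stmt;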


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::related_stmt): Change from
	a gimple stmt to a stmt_vec_info.
	(is_pattern_stmt_p): Update accordingly.
	* tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
	(vect_record_grouped_load_vectors): Likewise.
	* tree-vect-loop.c (vect_determine_vf_for_stmt): Likewise.
	(vect_fixup_reduc_chain, vect_update_vf_for_slp): Likewise.
	(vect_model_reduction_cost): Likewise.
	(vect_create_epilog_for_reduction): Likewise.
	(vectorizable_reduction, vectorizable_induction): Likewise.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
	Return the stmt_vec_info for the pattern statement.
	(vect_set_pattern_stmt): Update use of STMT_VINFO_RELATED_STMT.
	(vect_split_statement, vect_mark_pattern_stmts): Likewise.
	* tree-vect-slp.c (vect_detect_hybrid_slp_stmts): Likewise.
	(vect_detect_hybrid_slp, vect_get_slp_defs): Likewise.
	* tree-vect-stmts.c (vect_mark_relevant): Likewise.
	(vect_get_vec_def_for_operand_1, vectorizable_call): Likewise.
	(vectorizable_simd_clone_call, vect_analyze_stmt, new_stmt_vec_info)
	(free_stmt_vec_info, vect_is_simple_use): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:40.725217371 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:44.297185652 +0100
@@ -847,7 +847,7 @@ struct _stmt_vec_info {
         related_stmt of the "pattern stmt" points back to this stmt (which is
         the last stmt in the original sequence of stmts that constitutes the
         pattern).  */
-  gimple *related_stmt;
+  stmt_vec_info related_stmt;
 
   /* Used to keep a sequence of def stmts of a pattern stmt if such exists.
      The sequence is attached to the original statement rather than the
@@ -1189,16 +1189,8 @@ get_later_stmt (gimple *stmt1, gimple *s
 static inline bool
 is_pattern_stmt_p (stmt_vec_info stmt_info)
 {
-  gimple *related_stmt;
-  stmt_vec_info related_stmt_info;
-
-  related_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
-  if (related_stmt
-      && (related_stmt_info = vinfo_for_stmt (related_stmt))
-      && STMT_VINFO_IN_PATTERN_P (related_stmt_info))
-    return true;
-
-  return false;
+  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
+  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
 }
 
 /* Return true if BB is a loop header.  */
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:22:19.801403171 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:22:44.285185759 +0100
@@ -213,10 +213,10 @@ vect_preserves_scalar_order_p (gimple *s
      current position (but could happen earlier).  Reordering is therefore
      only possible if the first access is a write.  */
   if (is_pattern_stmt_p (stmtinfo_a))
-    stmt_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
+    stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
   if (is_pattern_stmt_p (stmtinfo_b))
-    stmt_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
-  gimple *earlier_stmt = get_earlier_stmt (stmt_a, stmt_b);
+    stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
+  gimple *earlier_stmt = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
   return !DR_IS_WRITE (STMT_VINFO_DATA_REF (vinfo_for_stmt (earlier_stmt)));
 }
 
@@ -6359,8 +6359,10 @@ vect_transform_grouped_load (gimple *stm
 void
 vect_record_grouped_load_vectors (gimple *stmt, vec<tree> result_chain)
 {
-  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
-  gimple *next_stmt, *new_stmt;
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  vec_info *vinfo = stmt_info->vinfo;
+  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
+  gimple *next_stmt;
   unsigned int i, gap_count;
   tree tmp_data_ref;
 
@@ -6389,29 +6391,28 @@ vect_record_grouped_load_vectors (gimple
 
       while (next_stmt)
         {
-	  new_stmt = SSA_NAME_DEF_STMT (tmp_data_ref);
+	  stmt_vec_info new_stmt_info = vinfo->lookup_def (tmp_data_ref);
 	  /* We assume that if VEC_STMT is not NULL, this is a case of multiple
 	     copies, and we put the new vector statement in the first available
 	     RELATED_STMT.  */
 	  if (!STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)))
-	    STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt;
+	    STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt_info;
 	  else
             {
               if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
                 {
 		  gimple *prev_stmt =
 		    STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
-		  gimple *rel_stmt =
-		    STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
-	          while (rel_stmt)
+		  stmt_vec_info rel_stmt_info
+		    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
+		  while (rel_stmt_info)
 		    {
-		      prev_stmt = rel_stmt;
-		      rel_stmt =
-                        STMT_VINFO_RELATED_STMT (vinfo_for_stmt (rel_stmt));
+		      prev_stmt = rel_stmt_info;
+		      rel_stmt_info = STMT_VINFO_RELATED_STMT (rel_stmt_info);
 		    }
 
-  	          STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt)) =
-                    new_stmt;
+		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt))
+		    = new_stmt_info;
                 }
             }
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:40.721217407 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:44.289185723 +0100
@@ -226,7 +226,7 @@ vect_determine_vf_for_stmt (stmt_vec_inf
       && STMT_VINFO_RELATED_STMT (stmt_info))
     {
       gimple *pattern_def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
-      stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
+      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
 
       /* If a pattern statement has def stmts, analyze them too.  */
       for (gimple_stmt_iterator si = gsi_start (pattern_def_seq);
@@ -654,23 +654,23 @@ vect_analyze_scalar_cycles (loop_vec_inf
 static void
 vect_fixup_reduc_chain (gimple *stmt)
 {
-  gimple *firstp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
-  gimple *stmtp;
-  gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (firstp))
-	      && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)));
-  REDUC_GROUP_SIZE (vinfo_for_stmt (firstp))
-    = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info firstp = STMT_VINFO_RELATED_STMT (stmt_info);
+  stmt_vec_info stmtp;
+  gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (firstp)
+	      && REDUC_GROUP_FIRST_ELEMENT (stmt_info));
+  REDUC_GROUP_SIZE (firstp) = REDUC_GROUP_SIZE (stmt_info);
   do
     {
       stmtp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
-      REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmtp)) = firstp;
+      REDUC_GROUP_FIRST_ELEMENT (stmtp) = firstp;
       stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
       if (stmt)
-	REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmtp))
+	REDUC_GROUP_NEXT_ELEMENT (stmtp)
 	  = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
     }
   while (stmt);
-  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmtp)) = vect_reduction_def;
+  STMT_VINFO_DEF_TYPE (stmtp) = vect_reduction_def;
 }
 
 /* Fixup scalar cycles that now have their stmts detected as patterns.  */
@@ -1436,14 +1436,10 @@ vect_update_vf_for_slp (loop_vec_info lo
       for (gimple_stmt_iterator si = gsi_start_bb (bb); !gsi_end_p (si);
 	   gsi_next (&si))
 	{
-	  gimple *stmt = gsi_stmt (si);
 	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_info)
 	      && STMT_VINFO_RELATED_STMT (stmt_info))
-	    {
-	      stmt = STMT_VINFO_RELATED_STMT (stmt_info);
-	      stmt_info = vinfo_for_stmt (stmt);
-	    }
+	    stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
 	  if ((STMT_VINFO_RELEVANT_P (stmt_info)
 	       || VECTORIZABLE_CYCLE_DEF (STMT_VINFO_DEF_TYPE (stmt_info)))
 	      && !PURE_SLP_STMT (stmt_info))
@@ -2247,7 +2243,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
 	    {
 	      gimple *pattern_def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
-	      stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
+	      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
 	      STMT_SLP_TYPE (stmt_info) = loop_vect;
 	      for (gimple_stmt_iterator pi = gsi_start (pattern_def_seq);
 		   !gsi_end_p (pi); gsi_next (&pi))
@@ -3836,7 +3832,6 @@ vect_model_reduction_cost (stmt_vec_info
   enum tree_code code;
   optab optab;
   tree vectype;
-  gimple *orig_stmt;
   machine_mode mode;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -3852,12 +3847,12 @@ vect_model_reduction_cost (stmt_vec_info
 
   vectype = STMT_VINFO_VECTYPE (stmt_info);
   mode = TYPE_MODE (vectype);
-  orig_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
+  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
 
-  if (!orig_stmt)
-    orig_stmt = STMT_VINFO_STMT (stmt_info);
+  if (!orig_stmt_info)
+    orig_stmt_info = stmt_info;
 
-  code = gimple_assign_rhs_code (orig_stmt);
+  code = gimple_assign_rhs_code (orig_stmt_info->stmt);
 
   if (reduction_type == EXTRACT_LAST_REDUCTION
       || reduction_type == FOLD_LEFT_REDUCTION)
@@ -3902,7 +3897,7 @@ vect_model_reduction_cost (stmt_vec_info
      We have a reduction operator that will reduce the vector in one statement.
      Also requires scalar extract.  */
 
-  if (!loop || !nested_in_vect_loop_p (loop, orig_stmt))
+  if (!loop || !nested_in_vect_loop_p (loop, orig_stmt_info))
     {
       if (reduc_fn != IFN_LAST)
 	{
@@ -3953,7 +3948,7 @@ vect_model_reduction_cost (stmt_vec_info
 	{
 	  int vec_size_in_bits = tree_to_uhwi (TYPE_SIZE (vectype));
 	  tree bitsize =
-	    TYPE_SIZE (TREE_TYPE (gimple_assign_lhs (orig_stmt)));
+	    TYPE_SIZE (TREE_TYPE (gimple_assign_lhs (orig_stmt_info->stmt)));
 	  int element_bitsize = tree_to_uhwi (bitsize);
 	  int nelements = vec_size_in_bits / element_bitsize;
 
@@ -4447,7 +4442,7 @@ vect_create_epilog_for_reduction (vec<tr
   tree orig_name, scalar_result;
   imm_use_iterator imm_iter, phi_imm_iter;
   use_operand_p use_p, phi_use_p;
-  gimple *use_stmt, *orig_stmt, *reduction_phi = NULL;
+  gimple *use_stmt, *reduction_phi = NULL;
   bool nested_in_vect_loop = false;
   auto_vec<gimple *> new_phis;
   auto_vec<gimple *> inner_phis;
@@ -4726,7 +4721,7 @@ vect_create_epilog_for_reduction (vec<tr
           else
 	    {
 	      def = vect_get_vec_def_for_stmt_copy (dt, def);
-	      STMT_VINFO_RELATED_STMT (prev_phi_info) = phi;
+	      STMT_VINFO_RELATED_STMT (prev_phi_info) = phi_info;
 	    }
 
           SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def);
@@ -4758,7 +4753,7 @@ vect_create_epilog_for_reduction (vec<tr
 	      SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
 			       PHI_RESULT (phi));
 	      stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
-	      STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi;
+	      STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi_info;
 	      prev_phi_info = outer_phi_info;
 	    }
 	}
@@ -4775,27 +4770,26 @@ vect_create_epilog_for_reduction (vec<tr
          Otherwise (it is a regular reduction) - the tree-code and scalar-def
          are taken from STMT.  */
 
-  orig_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
-  if (!orig_stmt)
+  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
+  if (!orig_stmt_info)
     {
       /* Regular reduction  */
-      orig_stmt = stmt;
+      orig_stmt_info = stmt_info;
     }
   else
     {
       /* Reduction pattern  */
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (orig_stmt);
-      gcc_assert (STMT_VINFO_IN_PATTERN_P (stmt_vinfo));
-      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_vinfo) == stmt);
+      gcc_assert (STMT_VINFO_IN_PATTERN_P (orig_stmt_info));
+      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) == stmt_info);
     }
 
-  code = gimple_assign_rhs_code (orig_stmt);
+  code = gimple_assign_rhs_code (orig_stmt_info->stmt);
   /* For MINUS_EXPR the initial vector is [init_val,0,...,0], therefore,
      partial results are added and not subtracted.  */
   if (code == MINUS_EXPR) 
     code = PLUS_EXPR;
   
-  scalar_dest = gimple_assign_lhs (orig_stmt);
+  scalar_dest = gimple_assign_lhs (orig_stmt_info->stmt);
   scalar_type = TREE_TYPE (scalar_dest);
   scalar_results.create (group_size); 
   new_scalar_dest = vect_create_destination_var (scalar_dest, NULL);
@@ -5613,10 +5607,11 @@ vect_create_epilog_for_reduction (vec<tr
         {
 	  gimple *current_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[k];
 
-          orig_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
-          /* SLP statements can't participate in patterns.  */
-          gcc_assert (!orig_stmt);
-          scalar_dest = gimple_assign_lhs (current_stmt);
+	  orig_stmt_info
+	    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
+	  /* SLP statements can't participate in patterns.  */
+	  gcc_assert (!orig_stmt_info);
+	  scalar_dest = gimple_assign_lhs (current_stmt);
         }
 
       phis.create (3);
@@ -6097,8 +6092,6 @@ vectorizable_reduction (gimple *stmt, gi
   enum tree_code cond_reduc_op_code = ERROR_MARK;
   tree scalar_type;
   bool is_simple_use;
-  gimple *orig_stmt;
-  stmt_vec_info orig_stmt_info = NULL;
   int i;
   int ncopies;
   int epilog_copies;
@@ -6229,7 +6222,7 @@ vectorizable_reduction (gimple *stmt, gi
 		      if (j == 0)
 			STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
 		      else
-			STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi;
+			STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi_info;
 		      prev_phi_info = new_phi_info;
 		    }
 		}
@@ -6259,10 +6252,9 @@ vectorizable_reduction (gimple *stmt, gi
      the STMT_VINFO_RELATED_STMT field records the last stmt in
      the original sequence that constitutes the pattern.  */
 
-  orig_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
-  if (orig_stmt)
+  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
+  if (orig_stmt_info)
     {
-      orig_stmt_info = vinfo_for_stmt (orig_stmt);
       gcc_assert (STMT_VINFO_IN_PATTERN_P (orig_stmt_info));
       gcc_assert (!STMT_VINFO_IN_PATTERN_P (stmt_info));
     }
@@ -6393,7 +6385,7 @@ vectorizable_reduction (gimple *stmt, gi
 	  return false;
 	}
 
-      if (orig_stmt)
+      if (orig_stmt_info)
 	reduc_def_stmt = STMT_VINFO_REDUC_DEF (orig_stmt_info);
       else
 	reduc_def_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
@@ -6414,7 +6406,7 @@ vectorizable_reduction (gimple *stmt, gi
       /* For pattern recognized stmts, orig_stmt might be a reduction,
 	 but some helper statements for the pattern might not, or
 	 might be COND_EXPRs with reduction uses in the condition.  */
-      gcc_assert (orig_stmt);
+      gcc_assert (orig_stmt_info);
       return false;
     }
 
@@ -6548,10 +6540,10 @@ vectorizable_reduction (gimple *stmt, gi
 	}
     }
 
-  if (orig_stmt)
-    gcc_assert (tmp == orig_stmt
+  if (orig_stmt_info)
+    gcc_assert (tmp == orig_stmt_info
 		|| (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp))
-		    == orig_stmt));
+		    == orig_stmt_info));
   else
     /* We changed STMT to be the first stmt in reduction chain, hence we
        check that in this case the first element in the chain is STMT.  */
@@ -6673,13 +6665,13 @@ vectorizable_reduction (gimple *stmt, gi
 
   vect_reduction_type reduction_type
     = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info);
-  if (orig_stmt
+  if (orig_stmt_info
       && (reduction_type == TREE_CODE_REDUCTION
 	  || reduction_type == FOLD_LEFT_REDUCTION))
     {
       /* This is a reduction pattern: get the vectype from the type of the
          reduction variable, and get the tree-code from orig_stmt.  */
-      orig_code = gimple_assign_rhs_code (orig_stmt);
+      orig_code = gimple_assign_rhs_code (orig_stmt_info->stmt);
       gcc_assert (vectype_out);
       vec_mode = TYPE_MODE (vectype_out);
     }
@@ -7757,7 +7749,7 @@ vectorizable_induction (gimple *phi,
  
 	  gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
 	  new_stmt_info = loop_vinfo->add_stmt (new_stmt);
-	  STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt;
+	  STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt_info;
 	  prev_stmt_vinfo = new_stmt_info;
 	}
     }
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:37.253248202 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:44.289185723 +0100
@@ -94,10 +94,11 @@ vect_pattern_detected (const char *name,
     }
 }
 
-/* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO.
-   Set its vector type to VECTYPE if it doesn't have one already.  */
+/* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO and
+   return the pattern statement's stmt_vec_info.  Set its vector type to
+   VECTYPE if it doesn't have one already.  */
 
-static void
+static stmt_vec_info
 vect_init_pattern_stmt (gimple *pattern_stmt, stmt_vec_info orig_stmt_info,
 			tree vectype)
 {
@@ -107,11 +108,12 @@ vect_init_pattern_stmt (gimple *pattern_
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
-  STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info->stmt;
+  STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
   STMT_VINFO_DEF_TYPE (pattern_stmt_info)
     = STMT_VINFO_DEF_TYPE (orig_stmt_info);
   if (!STMT_VINFO_VECTYPE (pattern_stmt_info))
     STMT_VINFO_VECTYPE (pattern_stmt_info) = vectype;
+  return pattern_stmt_info;
 }
 
 /* Set the pattern statement of ORIG_STMT_INFO to PATTERN_STMT.
@@ -123,8 +125,8 @@ vect_set_pattern_stmt (gimple *pattern_s
 		       tree vectype)
 {
   STMT_VINFO_IN_PATTERN_P (orig_stmt_info) = true;
-  STMT_VINFO_RELATED_STMT (orig_stmt_info) = pattern_stmt;
-  vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, vectype);
+  STMT_VINFO_RELATED_STMT (orig_stmt_info)
+    = vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, vectype);
 }
 
 /* Add NEW_STMT to STMT_INFO's pattern definition statements.  If VECTYPE
@@ -634,8 +636,7 @@ vect_split_statement (stmt_vec_info stmt
     {
       /* STMT2_INFO is part of a pattern.  Get the statement to which
 	 the pattern is attached.  */
-      stmt_vec_info orig_stmt2_info
-	= vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt2_info));
+      stmt_vec_info orig_stmt2_info = STMT_VINFO_RELATED_STMT (stmt2_info);
       vect_init_pattern_stmt (stmt1, orig_stmt2_info, vectype);
 
       if (dump_enabled_p ())
@@ -659,7 +660,7 @@ vect_split_statement (stmt_vec_info stmt
 	}
 
       gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt2_info);
-      if (STMT_VINFO_RELATED_STMT (orig_stmt2_info) == stmt2_info->stmt)
+      if (STMT_VINFO_RELATED_STMT (orig_stmt2_info) == stmt2_info)
 	/* STMT2_INFO is the actual pattern statement.  Add STMT1
 	   to the end of the definition sequence.  */
 	gimple_seq_add_stmt_without_update (def_seq, stmt1);
@@ -4754,8 +4755,7 @@ vect_mark_pattern_stmts (gimple *orig_st
 	}
 
       /* Switch to the statement that ORIG replaces.  */
-      orig_stmt_info
-	= vinfo_for_stmt (STMT_VINFO_RELATED_STMT (orig_stmt_info));
+      orig_stmt_info = STMT_VINFO_RELATED_STMT (orig_stmt_info);
 
       /* We shouldn't be replacing the main pattern statement.  */
       gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) != orig_stmt);
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:37.253248202 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:44.293185688 +0100
@@ -2327,7 +2327,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
          original stmt for immediate uses.  */
       if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
 	  && STMT_VINFO_RELATED_STMT (stmt_vinfo))
-	stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
+	stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
       tree def;
       if (gimple_code (stmt) == GIMPLE_PHI)
 	def = gimple_phi_result (stmt);
@@ -2341,7 +2341,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
 	      continue;
 	    if (STMT_VINFO_IN_PATTERN_P (use_vinfo)
 		&& STMT_VINFO_RELATED_STMT (use_vinfo))
-	      use_vinfo = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (use_vinfo));
+	      use_vinfo = STMT_VINFO_RELATED_STMT (use_vinfo);
 	    if (!STMT_SLP_TYPE (use_vinfo)
 		&& (STMT_VINFO_RELEVANT (use_vinfo)
 		    || VECTORIZABLE_CYCLE_DEF (STMT_VINFO_DEF_TYPE (use_vinfo)))
@@ -2446,7 +2446,7 @@ vect_detect_hybrid_slp (loop_vec_info lo
 	      memset (&wi, 0, sizeof (wi));
 	      wi.info = loop_vinfo;
 	      gimple_stmt_iterator gsi2
-		= gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
+		= gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
 	      walk_gimple_stmt (&gsi2, vect_detect_hybrid_slp_2,
 				vect_detect_hybrid_slp_1, &wi);
 	      walk_gimple_seq (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info),
@@ -3612,7 +3612,7 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
 	  if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
 	    {
 	      gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
-	      gimple *related
+	      stmt_vec_info related
 		= STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
 	      tree first_def_op;
 
@@ -3622,7 +3622,8 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
 		first_def_op = gimple_get_lhs (first_def);
 	      if (operand_equal_p (oprnd, first_def_op, 0)
 		  || (related
-		      && operand_equal_p (oprnd, gimple_get_lhs (related), 0)))
+		      && operand_equal_p (oprnd,
+					  gimple_get_lhs (related->stmt), 0)))
 		{
 		  /* The number of vector defs is determined by the number of
 		     vector statements in the node from which we get those
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:40.725217371 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:44.293185688 +0100
@@ -202,7 +202,6 @@ vect_mark_relevant (vec<gimple *> *workl
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   enum vect_relevant save_relevant = STMT_VINFO_RELEVANT (stmt_info);
   bool save_live_p = STMT_VINFO_LIVE_P (stmt_info);
-  gimple *pattern_stmt;
 
   if (dump_enabled_p ())
     {
@@ -222,17 +221,16 @@ vect_mark_relevant (vec<gimple *> *workl
 	 as relevant/live because it's not going to be vectorized.
 	 Instead mark the pattern-stmt that replaces it.  */
 
-      pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
-
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_NOTE, vect_location,
 			 "last stmt in pattern. don't mark"
 			 " relevant/live.\n");
-      stmt_info = vinfo_for_stmt (pattern_stmt);
-      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == stmt);
+      stmt_vec_info old_stmt_info = stmt_info;
+      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
+      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == old_stmt_info);
       save_relevant = STMT_VINFO_RELEVANT (stmt_info);
       save_live_p = STMT_VINFO_LIVE_P (stmt_info);
-      stmt = pattern_stmt;
+      stmt = stmt_info->stmt;
     }
 
   STMT_VINFO_LIVE_P (stmt_info) |= live_p;
@@ -1489,8 +1487,8 @@ vect_get_vec_def_for_operand_1 (gimple *
         if (!vec_stmt
             && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
             && !STMT_VINFO_RELEVANT (def_stmt_info))
-          vec_stmt = STMT_VINFO_VEC_STMT (vinfo_for_stmt (
-                       STMT_VINFO_RELATED_STMT (def_stmt_info)));
+	  vec_stmt = (STMT_VINFO_VEC_STMT
+		      (STMT_VINFO_RELATED_STMT (def_stmt_info)));
         gcc_assert (vec_stmt);
 	if (gimple_code (vec_stmt) == GIMPLE_PHI)
 	  vec_oprnd = PHI_RESULT (vec_stmt);
@@ -3635,7 +3633,7 @@ vectorizable_call (gimple *gs, gimple_st
     return true;
 
   if (is_pattern_stmt_p (stmt_info))
-    stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
+    stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
   lhs = gimple_get_lhs (stmt_info->stmt);
 
   gassign *new_stmt
@@ -4370,7 +4368,7 @@ vectorizable_simd_clone_call (gimple *st
     {
       type = TREE_TYPE (scalar_dest);
       if (is_pattern_stmt_p (stmt_info))
-	lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info));
+	lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
       else
 	lhs = gimple_call_lhs (stmt);
       new_stmt = gimple_build_assign (lhs, build_zero_cst (type));
@@ -9420,7 +9418,6 @@ vect_analyze_stmt (gimple *stmt, bool *n
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   enum vect_relevant relevance = STMT_VINFO_RELEVANT (stmt_info);
   bool ok;
-  gimple *pattern_stmt;
   gimple_seq pattern_def_seq;
 
   if (dump_enabled_p ())
@@ -9482,18 +9479,18 @@ vect_analyze_stmt (gimple *stmt, bool *n
      traversal, don't analyze pattern stmts instead, the pattern stmts
      already will be part of SLP instance.  */
 
-  pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
+  stmt_vec_info pattern_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
   if (!STMT_VINFO_RELEVANT_P (stmt_info)
       && !STMT_VINFO_LIVE_P (stmt_info))
     {
       if (STMT_VINFO_IN_PATTERN_P (stmt_info)
-          && pattern_stmt
-          && (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_stmt))
-              || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_stmt))))
+	  && pattern_stmt_info
+	  && (STMT_VINFO_RELEVANT_P (pattern_stmt_info)
+	      || STMT_VINFO_LIVE_P (pattern_stmt_info)))
         {
           /* Analyze PATTERN_STMT instead of the original stmt.  */
-          stmt = pattern_stmt;
-          stmt_info = vinfo_for_stmt (pattern_stmt);
+	  stmt = pattern_stmt_info->stmt;
+	  stmt_info = pattern_stmt_info;
           if (dump_enabled_p ())
             {
               dump_printf_loc (MSG_NOTE, vect_location,
@@ -9511,9 +9508,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
     }
   else if (STMT_VINFO_IN_PATTERN_P (stmt_info)
 	   && node == NULL
-           && pattern_stmt
-           && (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_stmt))
-               || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_stmt))))
+	   && pattern_stmt_info
+	   && (STMT_VINFO_RELEVANT_P (pattern_stmt_info)
+	       || STMT_VINFO_LIVE_P (pattern_stmt_info)))
     {
       /* Analyze PATTERN_STMT too.  */
       if (dump_enabled_p ())
@@ -9523,7 +9520,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
           dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
         }
 
-      if (!vect_analyze_stmt (pattern_stmt, need_to_vectorize, node,
+      if (!vect_analyze_stmt (pattern_stmt_info, need_to_vectorize, node,
 			      node_instance, cost_vec))
         return false;
    }
@@ -9855,7 +9852,6 @@ new_stmt_vec_info (gimple *stmt, vec_inf
   STMT_VINFO_VEC_STMT (res) = NULL;
   STMT_VINFO_VECTORIZABLE (res) = true;
   STMT_VINFO_IN_PATTERN_P (res) = false;
-  STMT_VINFO_RELATED_STMT (res) = NULL;
   STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
   STMT_VINFO_DATA_REF (res) = NULL;
   STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
@@ -9936,16 +9932,14 @@ free_stmt_vec_info (gimple *stmt)
 	      release_ssa_name (lhs);
 	    free_stmt_vec_info (seq_stmt);
 	  }
-      stmt_vec_info patt_info
-	= vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
-      if (patt_info)
-	{
-	  gimple *patt_stmt = STMT_VINFO_STMT (patt_info);
-	  gimple_set_bb (patt_stmt, NULL);
-	  tree lhs = gimple_get_lhs (patt_stmt);
+      stmt_vec_info patt_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
+      if (patt_stmt_info)
+	{
+	  gimple_set_bb (patt_stmt_info->stmt, NULL);
+	  tree lhs = gimple_get_lhs (patt_stmt_info->stmt);
 	  if (lhs && TREE_CODE (lhs) == SSA_NAME)
 	    release_ssa_name (lhs);
-	  free_stmt_vec_info (patt_stmt);
+	  free_stmt_vec_info (patt_stmt_info);
 	}
     }
 
@@ -10143,8 +10137,8 @@ vect_is_simple_use (tree operand, vec_in
 	{
 	  if (STMT_VINFO_IN_PATTERN_P (stmt_vinfo))
 	    {
-	      def_stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
-	      stmt_vinfo = vinfo_for_stmt (def_stmt);
+	      stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
+	      def_stmt = stmt_vinfo->stmt;
 	    }
 	  switch (gimple_code (def_stmt))
 	    {

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [12/46] Make vect_finish_stmt_generation return a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (11 preceding siblings ...)
  2018-07-24  9:58 ` [13/46] Make STMT_VINFO_RELATED_STMT a stmt_vec_info Richard Sandiford
@ 2018-07-24  9:58 ` Richard Sandiford
  2018-07-25  9:19   ` Richard Biener
  2018-07-24  9:58 ` [14/46] Make STMT_VINFO_VEC_STMT " Richard Sandiford
                   ` (32 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:58 UTC (permalink / raw)
  To: gcc-patches

This patch makes vect_finish_replace_stmt and vect_finish_stmt_generation
return the stmt_vec_info for the vectorised statement, so that the caller
doesn't need a separate vinfo_for_stmt lookup to get at it.
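
For example (trimmed from the hunks below, not a complete caller),
code that used to do:

  vect_finish_stmt_generation (stmt, new_stmt, gsi);
  prev_stmt_info = vinfo_for_stmt (new_stmt);

can now do:

  stmt_vec_info new_stmt_info
    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
  prev_stmt_info = new_stmt_info;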

This involved changing the structure of the statement-generating loops
so that they use narrow scopes for the vectorised gimple statements
and use the existing (wider) scopes for the associated stmt_vec_infos.
This helps with gimple stmt->stmt_vec_info changes further down the line.
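
Roughly, the generation code now has this shape (a trimmed sketch based
on the vectorizable_conversion hunks below; the lhs handling is elided):

  stmt_vec_info new_stmt_info;
  if (code1 == CALL_EXPR)
    {
      gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
      new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
    }
  else
    {
      gassign *new_stmt = gimple_build_assign (vec_dest, code1, vop0);
      new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
    }

Each gimple statement is confined to the arm that builds it, while the
stmt_vec_info in the enclosing scope is what gets recorded in the
STMT_VINFO_VEC_STMT/STMT_VINFO_RELATED_STMT chain.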

The way we do this generation is another area ripe for clean-up,
but that's too much of a rabbit-hole for this series.


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_finish_replace_stmt): Return a stmt_vec_info.
	(vect_finish_stmt_generation): Likewise.
	* tree-vect-stmts.c (vect_finish_stmt_generation_1): Likewise.
	(vect_finish_replace_stmt, vect_finish_stmt_generation): Likewise.
	(vect_build_gather_load_calls): Use the return value of the above
	functions instead of a separate call to vinfo_for_stmt.  Use narrow
	scopes for the input gimple stmt and wider scopes for the associated
	stmt_vec_info.  Use vec_info::lookup_def when setting these
	stmt_vec_infos from an SSA_NAME definition.
	(vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
	(vect_create_vectorized_demotion_stmts, vectorizable_conversion)
	(vectorizable_assignment, vectorizable_shift, vectorizable_operation)
	(vectorizable_store, vectorizable_load, vectorizable_condition)
	(vectorizable_comparison): Likewise.
	* tree-vect-loop.c (vectorize_fold_left_reduction): Likewise.
	(vectorizable_reduction): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:37.257248166 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:40.725217371 +0100
@@ -1548,9 +1548,9 @@ extern void free_stmt_vec_info (gimple *
 extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
 				  enum vect_cost_for_stmt, stmt_vec_info,
 				  int, enum vect_cost_model_location);
-extern void vect_finish_replace_stmt (gimple *, gimple *);
-extern void vect_finish_stmt_generation (gimple *, gimple *,
-                                         gimple_stmt_iterator *);
+extern stmt_vec_info vect_finish_replace_stmt (gimple *, gimple *);
+extern stmt_vec_info vect_finish_stmt_generation (gimple *, gimple *,
+						  gimple_stmt_iterator *);
 extern bool vect_mark_stmts_to_be_vectorized (loop_vec_info);
 extern tree vect_get_store_rhs (gimple *);
 extern tree vect_get_vec_def_for_operand_1 (gimple *, enum vect_def_type);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:37.257248166 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:40.725217371 +0100
@@ -1729,15 +1729,15 @@ vect_get_vec_defs (tree op0, tree op1, g
 
 /* Helper function called by vect_finish_replace_stmt and
    vect_finish_stmt_generation.  Set the location of the new
-   statement and create a stmt_vec_info for it.  */
+   statement and create and return a stmt_vec_info for it.  */
 
-static void
+static stmt_vec_info
 vect_finish_stmt_generation_1 (gimple *stmt, gimple *vec_stmt)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
 
-  vinfo->add_stmt (vec_stmt);
+  stmt_vec_info vec_stmt_info = vinfo->add_stmt (vec_stmt);
 
   if (dump_enabled_p ())
     {
@@ -1753,12 +1753,15 @@ vect_finish_stmt_generation_1 (gimple *s
   int lp_nr = lookup_stmt_eh_lp (stmt);
   if (lp_nr != 0 && stmt_could_throw_p (vec_stmt))
     add_stmt_to_eh_lp (vec_stmt, lp_nr);
+
+  return vec_stmt_info;
 }
 
 /* Replace the scalar statement STMT with a new vector statement VEC_STMT,
-   which sets the same scalar result as STMT did.  */
+   which sets the same scalar result as STMT did.  Create and return a
+   stmt_vec_info for VEC_STMT.  */
 
-void
+stmt_vec_info
 vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
 {
   gcc_assert (gimple_get_lhs (stmt) == gimple_get_lhs (vec_stmt));
@@ -1766,14 +1769,13 @@ vect_finish_replace_stmt (gimple *stmt,
   gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
   gsi_replace (&gsi, vec_stmt, false);
 
-  vect_finish_stmt_generation_1 (stmt, vec_stmt);
+  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
 }
 
-/* Function vect_finish_stmt_generation.
-
-   Insert a new stmt.  */
+/* Add VEC_STMT to the vectorized implementation of STMT and insert it
+   before *GSI.  Create and return a stmt_vec_info for VEC_STMT.  */
 
-void
+stmt_vec_info
 vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
 			     gimple_stmt_iterator *gsi)
 {
@@ -1806,7 +1808,7 @@ vect_finish_stmt_generation (gimple *stm
 	}
     }
   gsi_insert_before (gsi, vec_stmt, GSI_SAME_STMT);
-  vect_finish_stmt_generation_1 (stmt, vec_stmt);
+  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
 }
 
 /* We want to vectorize a call to combined function CFN with function
@@ -2774,7 +2776,6 @@ vect_build_gather_load_calls (gimple *st
   for (int j = 0; j < ncopies; ++j)
     {
       tree op, var;
-      gimple *new_stmt;
       if (modifier == WIDEN && (j & 1))
 	op = permute_vec_elements (vec_oprnd0, vec_oprnd0,
 				   perm_mask, stmt, gsi);
@@ -2791,7 +2792,7 @@ vect_build_gather_load_calls (gimple *st
 				TYPE_VECTOR_SUBPARTS (idxtype)));
 	  var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	  op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
-	  new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
+	  gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
 	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	  op = var;
 	}
@@ -2816,8 +2817,8 @@ vect_build_gather_load_calls (gimple *st
 			       TYPE_VECTOR_SUBPARTS (masktype)));
 		  var = vect_get_new_ssa_name (masktype, vect_simple_var);
 		  mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);
-		  new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR,
-						  mask_op);
+		  gassign *new_stmt
+		    = gimple_build_assign (var, VIEW_CONVERT_EXPR, mask_op);
 		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		  mask_op = var;
 		}
@@ -2825,28 +2826,29 @@ vect_build_gather_load_calls (gimple *st
 	  src_op = mask_op;
 	}
 
-      new_stmt = gimple_build_call (gs_info->decl, 5, src_op, ptr, op,
-				    mask_op, scale);
+      gcall *new_call = gimple_build_call (gs_info->decl, 5, src_op, ptr, op,
+					   mask_op, scale);
 
+      stmt_vec_info new_stmt_info;
       if (!useless_type_conversion_p (vectype, rettype))
 	{
 	  gcc_assert (known_eq (TYPE_VECTOR_SUBPARTS (vectype),
 				TYPE_VECTOR_SUBPARTS (rettype)));
 	  op = vect_get_new_ssa_name (rettype, vect_simple_var);
-	  gimple_call_set_lhs (new_stmt, op);
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  gimple_call_set_lhs (new_call, op);
+	  vect_finish_stmt_generation (stmt, new_call, gsi);
 	  var = make_ssa_name (vec_dest);
 	  op = build1 (VIEW_CONVERT_EXPR, vectype, op);
-	  new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
+	  gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	}
       else
 	{
-	  var = make_ssa_name (vec_dest, new_stmt);
-	  gimple_call_set_lhs (new_stmt, var);
+	  var = make_ssa_name (vec_dest, new_call);
+	  gimple_call_set_lhs (new_call, var);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, new_call, gsi);
 	}
 
-      vect_finish_stmt_generation (stmt, new_stmt, gsi);
-
       if (modifier == NARROW)
 	{
 	  if ((j & 1) == 0)
@@ -2855,14 +2857,14 @@ vect_build_gather_load_calls (gimple *st
 	      continue;
 	    }
 	  var = permute_vec_elements (prev_res, var, perm_mask, stmt, gsi);
-	  new_stmt = SSA_NAME_DEF_STMT (var);
+	  new_stmt_info = loop_vinfo->lookup_def (var);
 	}
 
       if (prev_stmt_info == NULL_STMT_VEC_INFO)
-	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+      prev_stmt_info = new_stmt_info;
     }
 }
 
@@ -3023,7 +3025,7 @@ vectorizable_bswap (gimple *stmt, gimple
 
   /* Transform.  */
   vec<tree> vec_oprnds = vNULL;
-  gimple *new_stmt = NULL;
+  stmt_vec_info new_stmt_info = NULL;
   stmt_vec_info prev_stmt_info = NULL;
   for (unsigned j = 0; j < ncopies; j++)
     {
@@ -3038,6 +3040,7 @@ vectorizable_bswap (gimple *stmt, gimple
       tree vop;
       FOR_EACH_VEC_ELT (vec_oprnds, i, vop)
        {
+	 gimple *new_stmt;
 	 tree tem = make_ssa_name (char_vectype);
 	 new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
 						      char_vectype, vop));
@@ -3049,20 +3052,20 @@ vectorizable_bswap (gimple *stmt, gimple
 	 tem = make_ssa_name (vectype);
 	 new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
 						      vectype, tem2));
-	 vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
          if (slp_node)
-           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
        }
 
       if (slp_node)
         continue;
 
       if (j == 0)
-        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+      prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds.release ();
@@ -3123,7 +3126,6 @@ vectorizable_call (gimple *gs, gimple_st
     = { vect_unknown_def_type, vect_unknown_def_type, vect_unknown_def_type,
 	vect_unknown_def_type };
   int ndts = ARRAY_SIZE (dt);
-  gimple *new_stmt = NULL;
   int ncopies, j;
   auto_vec<tree, 8> vargs;
   auto_vec<tree, 8> orig_vargs;
@@ -3361,6 +3363,7 @@ vectorizable_call (gimple *gs, gimple_st
 
   bool masked_loop_p = loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo);
 
+  stmt_vec_info new_stmt_info = NULL;
   prev_stmt_info = NULL;
   if (modifier == NONE || ifn != IFN_LAST)
     {
@@ -3399,16 +3402,19 @@ vectorizable_call (gimple *gs, gimple_st
 			= gimple_build_call_internal_vec (ifn, vargs);
 		      gimple_call_set_lhs (call, half_res);
 		      gimple_call_set_nothrow (call, true);
-		      new_stmt = call;
-		      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, call, gsi);
 		      if ((i & 1) == 0)
 			{
 			  prev_res = half_res;
 			  continue;
 			}
 		      new_temp = make_ssa_name (vec_dest);
-		      new_stmt = gimple_build_assign (new_temp, convert_code,
-						      prev_res, half_res);
+		      gimple *new_stmt
+			= gimple_build_assign (new_temp, convert_code,
+					       prev_res, half_res);
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		    }
 		  else
 		    {
@@ -3431,10 +3437,10 @@ vectorizable_call (gimple *gs, gimple_st
 		      new_temp = make_ssa_name (vec_dest, call);
 		      gimple_call_set_lhs (call, new_temp);
 		      gimple_call_set_nothrow (call, true);
-		      new_stmt = call;
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, call, gsi);
 		    }
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
-		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 		}
 
 	      for (i = 0; i < nargs; i++)
@@ -3475,7 +3481,9 @@ vectorizable_call (gimple *gs, gimple_st
 	      gimple *init_stmt = gimple_build_assign (new_var, cst);
 	      vect_init_vector_1 (stmt, init_stmt, NULL);
 	      new_temp = make_ssa_name (vec_dest);
-	      new_stmt = gimple_build_assign (new_temp, new_var);
+	      gimple *new_stmt = gimple_build_assign (new_temp, new_var);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
 	  else if (modifier == NARROW)
 	    {
@@ -3486,16 +3494,17 @@ vectorizable_call (gimple *gs, gimple_st
 	      gcall *call = gimple_build_call_internal_vec (ifn, vargs);
 	      gimple_call_set_lhs (call, half_res);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt = call;
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
 	      if ((j & 1) == 0)
 		{
 		  prev_res = half_res;
 		  continue;
 		}
 	      new_temp = make_ssa_name (vec_dest);
-	      new_stmt = gimple_build_assign (new_temp, convert_code,
-					      prev_res, half_res);
+	      gassign *new_stmt = gimple_build_assign (new_temp, convert_code,
+						       prev_res, half_res);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
 	  else
 	    {
@@ -3504,19 +3513,18 @@ vectorizable_call (gimple *gs, gimple_st
 		call = gimple_build_call_internal_vec (ifn, vargs);
 	      else
 		call = gimple_build_call_vec (fndecl, vargs);
-	      new_temp = make_ssa_name (vec_dest, new_stmt);
+	      new_temp = make_ssa_name (vec_dest, call);
 	      gimple_call_set_lhs (call, new_temp);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt = call;
+	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
 	    }
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	  if (j == (modifier == NARROW ? 1 : 0))
-	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	  else
-	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-	  prev_stmt_info = vinfo_for_stmt (new_stmt);
+	  prev_stmt_info = new_stmt_info;
 	}
     }
   else if (modifier == NARROW)
@@ -3560,9 +3568,9 @@ vectorizable_call (gimple *gs, gimple_st
 		  new_temp = make_ssa_name (vec_dest, call);
 		  gimple_call_set_lhs (call, new_temp);
 		  gimple_call_set_nothrow (call, true);
-		  new_stmt = call;
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
-		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, call, gsi);
+		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 		}
 
 	      for (i = 0; i < nargs; i++)
@@ -3585,7 +3593,8 @@ vectorizable_call (gimple *gs, gimple_st
 		}
 	      else
 		{
-		  vec_oprnd1 = gimple_call_arg (new_stmt, 2*i + 1);
+		  vec_oprnd1 = gimple_call_arg (new_stmt_info->stmt,
+						2 * i + 1);
 		  vec_oprnd0
 		    = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd1);
 		  vec_oprnd1
@@ -3596,17 +3605,17 @@ vectorizable_call (gimple *gs, gimple_st
 	      vargs.quick_push (vec_oprnd1);
 	    }
 
-	  new_stmt = gimple_build_call_vec (fndecl, vargs);
+	  gcall *new_stmt = gimple_build_call_vec (fndecl, vargs);
 	  new_temp = make_ssa_name (vec_dest, new_stmt);
 	  gimple_call_set_lhs (new_stmt, new_temp);
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	  if (j == 0)
-	    STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
+	    STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
 	  else
-	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-	  prev_stmt_info = vinfo_for_stmt (new_stmt);
+	  prev_stmt_info = new_stmt_info;
 	}
 
       *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
@@ -3629,7 +3638,8 @@ vectorizable_call (gimple *gs, gimple_st
     stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
   lhs = gimple_get_lhs (stmt_info->stmt);
 
-  new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
+  gassign *new_stmt
+    = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
   set_vinfo_for_stmt (new_stmt, stmt_info);
   set_vinfo_for_stmt (stmt_info->stmt, NULL);
   STMT_VINFO_STMT (stmt_info) = new_stmt;
@@ -3752,7 +3762,6 @@ vectorizable_simd_clone_call (gimple *st
   vec_info *vinfo = stmt_info->vinfo;
   struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
   tree fndecl, new_temp;
-  gimple *new_stmt = NULL;
   int ncopies, j;
   auto_vec<simd_call_arg_info> arginfo;
   vec<tree> vargs = vNULL;
@@ -4106,7 +4115,7 @@ vectorizable_simd_clone_call (gimple *st
 			= build3 (BIT_FIELD_REF, atype, vec_oprnd0,
 				  bitsize_int (prec),
 				  bitsize_int ((m & (k - 1)) * prec));
-		      new_stmt
+		      gassign *new_stmt
 			= gimple_build_assign (make_ssa_name (atype),
 					       vec_oprnd0);
 		      vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -4142,7 +4151,7 @@ vectorizable_simd_clone_call (gimple *st
 		      else
 			{
 			  vec_oprnd0 = build_constructor (atype, ctor_elts);
-			  new_stmt
+			  gassign *new_stmt
 			    = gimple_build_assign (make_ssa_name (atype),
 						   vec_oprnd0);
 			  vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -4189,7 +4198,7 @@ vectorizable_simd_clone_call (gimple *st
 			       ncopies * nunits);
 		  tree tcst = wide_int_to_tree (type, cst);
 		  tree phi_arg = copy_ssa_name (op);
-		  new_stmt
+		  gassign *new_stmt
 		    = gimple_build_assign (phi_arg, code, phi_res, tcst);
 		  gimple_stmt_iterator si = gsi_after_labels (loop->header);
 		  gsi_insert_after (&si, new_stmt, GSI_NEW_STMT);
@@ -4211,8 +4220,9 @@ vectorizable_simd_clone_call (gimple *st
 			       j * nunits);
 		  tree tcst = wide_int_to_tree (type, cst);
 		  new_temp = make_ssa_name (TREE_TYPE (op));
-		  new_stmt = gimple_build_assign (new_temp, code,
-						  arginfo[i].op, tcst);
+		  gassign *new_stmt
+		    = gimple_build_assign (new_temp, code,
+					   arginfo[i].op, tcst);
 		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		  vargs.safe_push (new_temp);
 		}
@@ -4228,7 +4238,7 @@ vectorizable_simd_clone_call (gimple *st
 	    }
 	}
 
-      new_stmt = gimple_build_call_vec (fndecl, vargs);
+      gcall *new_call = gimple_build_call_vec (fndecl, vargs);
       if (vec_dest)
 	{
 	  gcc_assert (ratype || simd_clone_subparts (rtype) == nunits);
@@ -4236,12 +4246,13 @@ vectorizable_simd_clone_call (gimple *st
 	    new_temp = create_tmp_var (ratype);
 	  else if (simd_clone_subparts (vectype)
 		   == simd_clone_subparts (rtype))
-	    new_temp = make_ssa_name (vec_dest, new_stmt);
+	    new_temp = make_ssa_name (vec_dest, new_call);
 	  else
-	    new_temp = make_ssa_name (rtype, new_stmt);
-	  gimple_call_set_lhs (new_stmt, new_temp);
+	    new_temp = make_ssa_name (rtype, new_call);
+	  gimple_call_set_lhs (new_call, new_temp);
 	}
-      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+      stmt_vec_info new_stmt_info
+	= vect_finish_stmt_generation (stmt, new_call, gsi);
 
       if (vec_dest)
 	{
@@ -4264,15 +4275,18 @@ vectorizable_simd_clone_call (gimple *st
 		  else
 		    t = build3 (BIT_FIELD_REF, vectype, new_temp,
 				bitsize_int (prec), bitsize_int (l * prec));
-		  new_stmt
+		  gimple *new_stmt
 		    = gimple_build_assign (make_ssa_name (vectype), t);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+
 		  if (j == 0 && l == 0)
-		    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+		    STMT_VINFO_VEC_STMT (stmt_info)
+		      = *vec_stmt = new_stmt_info;
 		  else
-		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-		  prev_stmt_info = vinfo_for_stmt (new_stmt);
+		  prev_stmt_info = new_stmt_info;
 		}
 
 	      if (ratype)
@@ -4293,9 +4307,10 @@ vectorizable_simd_clone_call (gimple *st
 		    {
 		      tree tem = build4 (ARRAY_REF, rtype, new_temp,
 					 size_int (m), NULL_TREE, NULL_TREE);
-		      new_stmt
+		      gimple *new_stmt
 			= gimple_build_assign (make_ssa_name (rtype), tem);
-		      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		      CONSTRUCTOR_APPEND_ELT (ret_ctor_elts, NULL_TREE,
 					      gimple_assign_lhs (new_stmt));
 		    }
@@ -4306,16 +4321,17 @@ vectorizable_simd_clone_call (gimple *st
 	      if ((j & (k - 1)) != k - 1)
 		continue;
 	      vec_oprnd0 = build_constructor (vectype, ret_ctor_elts);
-	      new_stmt
+	      gimple *new_stmt
 		= gimple_build_assign (make_ssa_name (vec_dest), vec_oprnd0);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	      if ((unsigned) j == k - 1)
-		STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+		STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	      else
-		STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+		STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-	      prev_stmt_info = vinfo_for_stmt (new_stmt);
+	      prev_stmt_info = new_stmt_info;
 	      continue;
 	    }
 	  else if (ratype)
@@ -4323,19 +4339,20 @@ vectorizable_simd_clone_call (gimple *st
 	      tree t = build_fold_addr_expr (new_temp);
 	      t = build2 (MEM_REF, vectype, t,
 			  build_int_cst (TREE_TYPE (t), 0));
-	      new_stmt
+	      gimple *new_stmt
 		= gimple_build_assign (make_ssa_name (vec_dest), t);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      vect_clobber_variable (stmt, gsi, new_temp);
 	    }
 	}
 
       if (j == 0)
-	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+      prev_stmt_info = new_stmt_info;
     }
 
   vargs.release ();
@@ -4348,6 +4365,7 @@ vectorizable_simd_clone_call (gimple *st
   if (slp_node)
     return true;
 
+  gimple *new_stmt;
   if (scalar_dest)
     {
       type = TREE_TYPE (scalar_dest);
@@ -4465,7 +4483,6 @@ vect_create_vectorized_demotion_stmts (v
 {
   unsigned int i;
   tree vop0, vop1, new_tmp, vec_dest;
-  gimple *new_stmt;
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
 
   vec_dest = vec_dsts.pop ();
@@ -4475,10 +4492,11 @@ vect_create_vectorized_demotion_stmts (v
       /* Create demotion operation.  */
       vop0 = (*vec_oprnds)[i];
       vop1 = (*vec_oprnds)[i + 1];
-      new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
+      gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
       new_tmp = make_ssa_name (vec_dest, new_stmt);
       gimple_assign_set_lhs (new_stmt, new_tmp);
-      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+      stmt_vec_info new_stmt_info
+	= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
       if (multi_step_cvt)
 	/* Store the resulting vector for next recursive call.  */
@@ -4489,15 +4507,15 @@ vect_create_vectorized_demotion_stmts (v
 	     vectors in SLP_NODE or in vector info of the scalar statement
 	     (or in STMT_VINFO_RELATED_STMT chain).  */
 	  if (slp_node)
-	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 	  else
 	    {
 	      if (!*prev_stmt_info)
-		STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
+		STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
 	      else
-		STMT_VINFO_RELATED_STMT (*prev_stmt_info) = new_stmt;
+		STMT_VINFO_RELATED_STMT (*prev_stmt_info) = new_stmt_info;
 
-	      *prev_stmt_info = vinfo_for_stmt (new_stmt);
+	      *prev_stmt_info = new_stmt_info;
 	    }
 	}
     }
@@ -4595,7 +4613,6 @@ vectorizable_conversion (gimple *stmt, g
   tree new_temp;
   enum vect_def_type dt[2] = {vect_unknown_def_type, vect_unknown_def_type};
   int ndts = 2;
-  gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info;
   poly_uint64 nunits_in;
   poly_uint64 nunits_out;
@@ -4965,31 +4982,37 @@ vectorizable_conversion (gimple *stmt, g
 
 	  FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
 	    {
+	      stmt_vec_info new_stmt_info;
 	      /* Arguments are ready, create the new vector stmt.  */
 	      if (code1 == CALL_EXPR)
 		{
-		  new_stmt = gimple_build_call (decl1, 1, vop0);
+		  gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_call_set_lhs (new_stmt, new_temp);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 	      else
 		{
 		  gcc_assert (TREE_CODE_LENGTH (code1) == unary_op);
-		  new_stmt = gimple_build_assign (vec_dest, code1, vop0);
+		  gassign *new_stmt
+		    = gimple_build_assign (vec_dest, code1, vop0);
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_assign_set_lhs (new_stmt, new_temp);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      if (slp_node)
-		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 	      else
 		{
 		  if (!prev_stmt_info)
-		    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+		    STMT_VINFO_VEC_STMT (stmt_info)
+		      = *vec_stmt = new_stmt_info;
 		  else
-		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-		  prev_stmt_info = vinfo_for_stmt (new_stmt);
+		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+		  prev_stmt_info = new_stmt_info;
 		}
 	    }
 	}
@@ -5075,36 +5098,39 @@ vectorizable_conversion (gimple *stmt, g
 
 	  FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
 	    {
+	      stmt_vec_info new_stmt_info;
 	      if (cvt_type)
 		{
 		  if (codecvt1 == CALL_EXPR)
 		    {
-		      new_stmt = gimple_build_call (decl1, 1, vop0);
+		      gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
 		      new_temp = make_ssa_name (vec_dest, new_stmt);
 		      gimple_call_set_lhs (new_stmt, new_temp);
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		    }
 		  else
 		    {
 		      gcc_assert (TREE_CODE_LENGTH (codecvt1) == unary_op);
 		      new_temp = make_ssa_name (vec_dest);
-		      new_stmt = gimple_build_assign (new_temp, codecvt1,
-						      vop0);
+		      gassign *new_stmt
+			= gimple_build_assign (new_temp, codecvt1, vop0);
+		      new_stmt_info
+			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		    }
-
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 	      else
-		new_stmt = SSA_NAME_DEF_STMT (vop0);
+		new_stmt_info = vinfo->lookup_def (vop0);
 
 	      if (slp_node)
-		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 	      else
 		{
 		  if (!prev_stmt_info)
-		    STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
+		    STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
 		  else
-		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-		  prev_stmt_info = vinfo_for_stmt (new_stmt);
+		    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+		  prev_stmt_info = new_stmt_info;
 		}
 	    }
 	}
@@ -5136,19 +5162,20 @@ vectorizable_conversion (gimple *stmt, g
 	      {
 		if (codecvt1 == CALL_EXPR)
 		  {
-		    new_stmt = gimple_build_call (decl1, 1, vop0);
+		    gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
 		    new_temp = make_ssa_name (vec_dest, new_stmt);
 		    gimple_call_set_lhs (new_stmt, new_temp);
+		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		  }
 		else
 		  {
 		    gcc_assert (TREE_CODE_LENGTH (codecvt1) == unary_op);
 		    new_temp = make_ssa_name (vec_dest);
-		    new_stmt = gimple_build_assign (new_temp, codecvt1,
-						    vop0);
+		    gassign *new_stmt
+		      = gimple_build_assign (new_temp, codecvt1, vop0);
+		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		  }
 
-		vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		vec_oprnds0[i] = new_temp;
 	      }
 
@@ -5196,7 +5223,6 @@ vectorizable_assignment (gimple *stmt, g
   tree vop;
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   vec_info *vinfo = stmt_info->vinfo;
-  gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info = NULL;
   enum tree_code code;
   tree vectype_in;
@@ -5306,28 +5332,29 @@ vectorizable_assignment (gimple *stmt, g
         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
 
       /* Arguments are ready. create the new vector stmt.  */
+      stmt_vec_info new_stmt_info = NULL;
       FOR_EACH_VEC_ELT (vec_oprnds, i, vop)
        {
 	 if (CONVERT_EXPR_CODE_P (code)
 	     || code == VIEW_CONVERT_EXPR)
 	   vop = build1 (VIEW_CONVERT_EXPR, vectype, vop);
-         new_stmt = gimple_build_assign (vec_dest, vop);
+	 gassign *new_stmt = gimple_build_assign (vec_dest, vop);
          new_temp = make_ssa_name (vec_dest, new_stmt);
          gimple_assign_set_lhs (new_stmt, new_temp);
-         vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
          if (slp_node)
-           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
        }
 
       if (slp_node)
         continue;
 
       if (j == 0)
-        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+      prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds.release ();
@@ -5398,7 +5425,6 @@ vectorizable_shift (gimple *stmt, gimple
   machine_mode optab_op2_mode;
   enum vect_def_type dt[2] = {vect_unknown_def_type, vect_unknown_def_type};
   int ndts = 2;
-  gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info;
   poly_uint64 nunits_in;
   poly_uint64 nunits_out;
@@ -5706,25 +5732,26 @@ vectorizable_shift (gimple *stmt, gimple
         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
 
       /* Arguments are ready.  Create the new vector stmt.  */
+      stmt_vec_info new_stmt_info = NULL;
       FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
         {
           vop1 = vec_oprnds1[i];
-	  new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
+	  gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
           new_temp = make_ssa_name (vec_dest, new_stmt);
           gimple_assign_set_lhs (new_stmt, new_temp);
-          vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
           if (slp_node)
-            SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
         }
 
       if (slp_node)
         continue;
 
       if (j == 0)
-        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+      prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds0.release ();
@@ -5762,7 +5789,6 @@ vectorizable_operation (gimple *stmt, gi
   enum vect_def_type dt[3]
     = {vect_unknown_def_type, vect_unknown_def_type, vect_unknown_def_type};
   int ndts = 3;
-  gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info;
   poly_uint64 nunits_in;
   poly_uint64 nunits_out;
@@ -6090,37 +6116,41 @@ vectorizable_operation (gimple *stmt, gi
 	}
 
       /* Arguments are ready.  Create the new vector stmt.  */
+      stmt_vec_info new_stmt_info = NULL;
       FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
         {
 	  vop1 = ((op_type == binary_op || op_type == ternary_op)
 		  ? vec_oprnds1[i] : NULL_TREE);
 	  vop2 = ((op_type == ternary_op)
 		  ? vec_oprnds2[i] : NULL_TREE);
-	  new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1, vop2);
+	  gassign *new_stmt = gimple_build_assign (vec_dest, code,
+						   vop0, vop1, vop2);
 	  new_temp = make_ssa_name (vec_dest, new_stmt);
 	  gimple_assign_set_lhs (new_stmt, new_temp);
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	  if (vec_cvt_dest)
 	    {
 	      new_temp = build1 (VIEW_CONVERT_EXPR, vectype_out, new_temp);
-	      new_stmt = gimple_build_assign (vec_cvt_dest, VIEW_CONVERT_EXPR,
-					      new_temp);
+	      gassign *new_stmt
+		= gimple_build_assign (vec_cvt_dest, VIEW_CONVERT_EXPR,
+				       new_temp);
 	      new_temp = make_ssa_name (vec_cvt_dest, new_stmt);
 	      gimple_assign_set_lhs (new_stmt, new_temp);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
           if (slp_node)
-	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
         }
 
       if (slp_node)
         continue;
 
       if (j == 0)
-	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+      prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds0.release ();
@@ -6230,7 +6260,6 @@ vectorizable_store (gimple *stmt, gimple
   vec_info *vinfo = stmt_info->vinfo;
   tree aggr_type;
   gather_scatter_info gs_info;
-  gimple *new_stmt;
   poly_uint64 vf;
   vec_load_store_type vls_type;
   tree ref_type;
@@ -6520,7 +6549,8 @@ vectorizable_store (gimple *stmt, gimple
 				    TYPE_VECTOR_SUBPARTS (srctype)));
 	      var = vect_get_new_ssa_name (srctype, vect_simple_var);
 	      src = build1 (VIEW_CONVERT_EXPR, srctype, src);
-	      new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
+	      gassign *new_stmt
+		= gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
 	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      src = var;
 	    }
@@ -6531,21 +6561,22 @@ vectorizable_store (gimple *stmt, gimple
 				    TYPE_VECTOR_SUBPARTS (idxtype)));
 	      var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	      op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
-	      new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
+	      gassign *new_stmt
+		= gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
 	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      op = var;
 	    }
 
-	  new_stmt
+	  gcall *new_stmt
 	    = gimple_build_call (gs_info.decl, 5, ptr, mask, op, src, scale);
-
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  stmt_vec_info new_stmt_info
+	    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	  if (prev_stmt_info == NULL_STMT_VEC_INFO)
-	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	  else
-	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-	  prev_stmt_info = vinfo_for_stmt (new_stmt);
+	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+	  prev_stmt_info = new_stmt_info;
 	}
       return true;
     }
@@ -6806,7 +6837,8 @@ vectorizable_store (gimple *stmt, gimple
 
 		  /* And store it to *running_off.  */
 		  assign = gimple_build_assign (newref, elem);
-		  vect_finish_stmt_generation (stmt, assign, gsi);
+		  stmt_vec_info assign_info
+		    = vect_finish_stmt_generation (stmt, assign, gsi);
 
 		  group_el += lnel;
 		  if (! slp
@@ -6825,10 +6857,10 @@ vectorizable_store (gimple *stmt, gimple
 		    {
 		      if (j == 0 && i == 0)
 			STMT_VINFO_VEC_STMT (stmt_info)
-			    = *vec_stmt = assign;
+			    = *vec_stmt = assign_info;
 		      else
-			STMT_VINFO_RELATED_STMT (prev_stmt_info) = assign;
-		      prev_stmt_info = vinfo_for_stmt (assign);
+			STMT_VINFO_RELATED_STMT (prev_stmt_info) = assign_info;
+		      prev_stmt_info = assign_info;
 		    }
 		}
 	    }
@@ -6931,7 +6963,7 @@ vectorizable_store (gimple *stmt, gimple
   tree vec_mask = NULL_TREE;
   for (j = 0; j < ncopies; j++)
     {
-
+      stmt_vec_info new_stmt_info;
       if (j == 0)
 	{
           if (slp)
@@ -7081,15 +7113,14 @@ vectorizable_store (gimple *stmt, gimple
 	      gimple_call_set_lhs (call, data_ref);
 	    }
 	  gimple_call_set_nothrow (call, true);
-	  new_stmt = call;
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
 
 	  /* Record that VEC_ARRAY is now dead.  */
 	  vect_clobber_variable (stmt, gsi, vec_array);
 	}
       else
 	{
-	  new_stmt = NULL;
+	  new_stmt_info = NULL;
 	  if (grouped_store)
 	    {
 	      if (j == 0)
@@ -7126,8 +7157,8 @@ vectorizable_store (gimple *stmt, gimple
 		      (IFN_SCATTER_STORE, 4, dataref_ptr, vec_offset,
 		       scale, vec_oprnd);
 		  gimple_call_set_nothrow (call, true);
-		  new_stmt = call;
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, call, gsi);
 		  break;
 		}
 
@@ -7186,7 +7217,8 @@ vectorizable_store (gimple *stmt, gimple
 						  dataref_ptr, ptr,
 						  final_mask, vec_oprnd);
 		  gimple_call_set_nothrow (call, true);
-		  new_stmt = call;
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, call, gsi);
 		}
 	      else
 		{
@@ -7206,9 +7238,11 @@ vectorizable_store (gimple *stmt, gimple
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    TYPE_ALIGN (elem_type));
 		  vect_copy_ref_info (data_ref, DR_REF (first_dr));
-		  new_stmt = gimple_build_assign (data_ref, vec_oprnd);
+		  gassign *new_stmt
+		    = gimple_build_assign (data_ref, vec_oprnd);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	      if (slp)
 		continue;
@@ -7221,10 +7255,10 @@ vectorizable_store (gimple *stmt, gimple
       if (!slp)
 	{
 	  if (j == 0)
-	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	  else
-	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-	  prev_stmt_info = vinfo_for_stmt (new_stmt);
+	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+	  prev_stmt_info = new_stmt_info;
 	}
     }
 
@@ -7370,7 +7404,6 @@ vectorizable_load (gimple *stmt, gimple_
   tree elem_type;
   tree new_temp;
   machine_mode mode;
-  gimple *new_stmt = NULL;
   tree dummy;
   enum dr_alignment_support alignment_support_scheme;
   tree dataref_ptr = NULL_TREE;
@@ -7812,14 +7845,17 @@ vectorizable_load (gimple *stmt, gimple_
 	{
 	  if (nloads > 1)
 	    vec_alloc (v, nloads);
+	  stmt_vec_info new_stmt_info = NULL;
 	  for (i = 0; i < nloads; i++)
 	    {
 	      tree this_off = build_int_cst (TREE_TYPE (alias_off),
 					     group_el * elsz + cst_offset);
 	      tree data_ref = build2 (MEM_REF, ltype, running_off, this_off);
 	      vect_copy_ref_info (data_ref, DR_REF (first_dr));
-	      new_stmt = gimple_build_assign (make_ssa_name (ltype), data_ref);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      gassign *new_stmt
+		= gimple_build_assign (make_ssa_name (ltype), data_ref);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      if (nloads > 1)
 		CONSTRUCTOR_APPEND_ELT (v, NULL_TREE,
 					gimple_assign_lhs (new_stmt));
@@ -7841,31 +7877,33 @@ vectorizable_load (gimple *stmt, gimple_
 	    {
 	      tree vec_inv = build_constructor (lvectype, v);
 	      new_temp = vect_init_vector (stmt, vec_inv, lvectype, gsi);
-	      new_stmt = SSA_NAME_DEF_STMT (new_temp);
+	      new_stmt_info = vinfo->lookup_def (new_temp);
 	      if (lvectype != vectype)
 		{
-		  new_stmt = gimple_build_assign (make_ssa_name (vectype),
-						  VIEW_CONVERT_EXPR,
-						  build1 (VIEW_CONVERT_EXPR,
-							  vectype, new_temp));
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  gassign *new_stmt
+		    = gimple_build_assign (make_ssa_name (vectype),
+					   VIEW_CONVERT_EXPR,
+					   build1 (VIEW_CONVERT_EXPR,
+						   vectype, new_temp));
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 	    }
 
 	  if (slp)
 	    {
 	      if (slp_perm)
-		dr_chain.quick_push (gimple_assign_lhs (new_stmt));
+		dr_chain.quick_push (gimple_assign_lhs (new_stmt_info->stmt));
 	      else
-		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 	    }
 	  else
 	    {
 	      if (j == 0)
-		STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+		STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	      else
-		STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-	      prev_stmt_info = vinfo_for_stmt (new_stmt);
+		STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+	      prev_stmt_info = new_stmt_info;
 	    }
 	}
       if (slp_perm)
@@ -8122,6 +8160,7 @@ vectorizable_load (gimple *stmt, gimple_
   poly_uint64 group_elt = 0;
   for (j = 0; j < ncopies; j++)
     {
+      stmt_vec_info new_stmt_info = NULL;
       /* 1. Create the vector or array pointer update chain.  */
       if (j == 0)
 	{
@@ -8228,8 +8267,7 @@ vectorizable_load (gimple *stmt, gimple_
 	    }
 	  gimple_call_set_lhs (call, vec_array);
 	  gimple_call_set_nothrow (call, true);
-	  new_stmt = call;
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
 
 	  /* Extract each vector into an SSA_NAME.  */
 	  for (i = 0; i < vec_num; i++)
@@ -8264,6 +8302,7 @@ vectorizable_load (gimple *stmt, gimple_
 					       stmt, bump);
 
 	      /* 2. Create the vector-load in the loop.  */
+	      gimple *new_stmt = NULL;
 	      switch (alignment_support_scheme)
 		{
 		case dr_aligned:
@@ -8421,7 +8460,8 @@ vectorizable_load (gimple *stmt, gimple_
 		}
 	      new_temp = make_ssa_name (vec_dest, new_stmt);
 	      gimple_set_lhs (new_stmt, new_temp);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 	      /* 3. Handle explicit realignment if necessary/supported.
 		 Create in loop:
@@ -8437,7 +8477,8 @@ vectorizable_load (gimple *stmt, gimple_
 						  msq, lsq, realignment_token);
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_assign_set_lhs (new_stmt, new_temp);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
 		  if (alignment_support_scheme == dr_explicit_realign_optimized)
 		    {
@@ -8477,7 +8518,7 @@ vectorizable_load (gimple *stmt, gimple_
 					        (gimple_assign_rhs1 (stmt))));
 		      new_temp = vect_init_vector (stmt, tem, vectype, NULL);
 		      new_stmt = SSA_NAME_DEF_STMT (new_temp);
-		      vinfo->add_stmt (new_stmt);
+		      new_stmt_info = vinfo->add_stmt (new_stmt);
 		    }
 		  else
 		    {
@@ -8485,7 +8526,7 @@ vectorizable_load (gimple *stmt, gimple_
 		      gsi_next (&gsi2);
 		      new_temp = vect_init_vector (stmt, scalar_dest,
 						   vectype, &gsi2);
-		      new_stmt = SSA_NAME_DEF_STMT (new_temp);
+		      new_stmt_info = vinfo->lookup_def (new_temp);
 		    }
 		}
 
@@ -8494,7 +8535,7 @@ vectorizable_load (gimple *stmt, gimple_
 		  tree perm_mask = perm_mask_for_reverse (vectype);
 		  new_temp = permute_vec_elements (new_temp, new_temp,
 						   perm_mask, stmt, gsi);
-		  new_stmt = SSA_NAME_DEF_STMT (new_temp);
+		  new_stmt_info = vinfo->lookup_def (new_temp);
 		}
 
 	      /* Collect vector loads and later create their permutation in
@@ -8504,7 +8545,7 @@ vectorizable_load (gimple *stmt, gimple_
 
 	      /* Store vector loads in the corresponding SLP_NODE.  */
 	      if (slp && !slp_perm)
-		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+		SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 
 	      /* With SLP permutation we load the gaps as well, without
 	         we need to skip the gaps after we manage to fully load
@@ -8561,10 +8602,10 @@ vectorizable_load (gimple *stmt, gimple_
           else
 	    {
 	      if (j == 0)
-	        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	      else
-	        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
-	      prev_stmt_info = vinfo_for_stmt (new_stmt);
+	        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
+	      prev_stmt_info = new_stmt_info;
 	    }
         }
       dr_chain.release ();
@@ -8869,7 +8910,7 @@ vectorizable_condition (gimple *stmt, gi
   /* Handle cond expr.  */
   for (j = 0; j < ncopies; j++)
     {
-      gimple *new_stmt = NULL;
+      stmt_vec_info new_stmt_info = NULL;
       if (j == 0)
 	{
           if (slp_node)
@@ -8974,6 +9015,7 @@ vectorizable_condition (gimple *stmt, gi
 	      else
 		{
 		  new_temp = make_ssa_name (vec_cmp_type);
+		  gassign *new_stmt;
 		  if (bitop1 == BIT_NOT_EXPR)
 		    new_stmt = gimple_build_assign (new_temp, bitop1,
 						    vec_cond_rhs);
@@ -9005,19 +9047,19 @@ vectorizable_condition (gimple *stmt, gi
 	      if (!is_gimple_val (vec_compare))
 		{
 		  tree vec_compare_name = make_ssa_name (vec_cmp_type);
-		  new_stmt = gimple_build_assign (vec_compare_name,
-						  vec_compare);
+		  gassign *new_stmt = gimple_build_assign (vec_compare_name,
+							   vec_compare);
 		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		  vec_compare = vec_compare_name;
 		}
 	      gcc_assert (reduc_index == 2);
-	      new_stmt = gimple_build_call_internal
+	      gcall *new_stmt = gimple_build_call_internal
 		(IFN_FOLD_EXTRACT_LAST, 3, else_clause, vec_compare,
 		 vec_then_clause);
 	      gimple_call_set_lhs (new_stmt, scalar_dest);
 	      SSA_NAME_DEF_STMT (scalar_dest) = new_stmt;
 	      if (stmt == gsi_stmt (*gsi))
-		vect_finish_replace_stmt (stmt, new_stmt);
+		new_stmt_info = vect_finish_replace_stmt (stmt, new_stmt);
 	      else
 		{
 		  /* In this case we're moving the definition to later in the
@@ -9025,30 +9067,32 @@ vectorizable_condition (gimple *stmt, gi
 		     lhs are in phi statements.  */
 		  gimple_stmt_iterator old_gsi = gsi_for_stmt (stmt);
 		  gsi_remove (&old_gsi, true);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 	    }
 	  else
 	    {
 	      new_temp = make_ssa_name (vec_dest);
-	      new_stmt = gimple_build_assign (new_temp, VEC_COND_EXPR,
-					      vec_compare, vec_then_clause,
-					      vec_else_clause);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      gassign *new_stmt
+		= gimple_build_assign (new_temp, VEC_COND_EXPR, vec_compare,
+				       vec_then_clause, vec_else_clause);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
           if (slp_node)
-            SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
         }
 
         if (slp_node)
           continue;
 
-        if (j == 0)
-          STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
-        else
-          STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	if (j == 0)
+	  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
+	else
+	  STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-        prev_stmt_info = vinfo_for_stmt (new_stmt);
+	prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds0.release ();
@@ -9244,7 +9288,7 @@ vectorizable_comparison (gimple *stmt, g
   /* Handle cmp expr.  */
   for (j = 0; j < ncopies; j++)
     {
-      gassign *new_stmt = NULL;
+      stmt_vec_info new_stmt_info = NULL;
       if (j == 0)
 	{
 	  if (slp_node)
@@ -9286,18 +9330,21 @@ vectorizable_comparison (gimple *stmt, g
 	  new_temp = make_ssa_name (mask);
 	  if (bitop1 == NOP_EXPR)
 	    {
-	      new_stmt = gimple_build_assign (new_temp, code,
-					      vec_rhs1, vec_rhs2);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      gassign *new_stmt = gimple_build_assign (new_temp, code,
+						       vec_rhs1, vec_rhs2);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
 	  else
 	    {
+	      gassign *new_stmt;
 	      if (bitop1 == BIT_NOT_EXPR)
 		new_stmt = gimple_build_assign (new_temp, bitop1, vec_rhs2);
 	      else
 		new_stmt = gimple_build_assign (new_temp, bitop1, vec_rhs1,
 						vec_rhs2);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	      if (bitop2 != NOP_EXPR)
 		{
 		  tree res = make_ssa_name (mask);
@@ -9306,22 +9353,23 @@ vectorizable_comparison (gimple *stmt, g
 		  else
 		    new_stmt = gimple_build_assign (res, bitop2, vec_rhs1,
 						    new_temp);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  new_stmt_info
+		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
 		}
 	    }
 	  if (slp_node)
-	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 	}
 
       if (slp_node)
 	continue;
 
       if (j == 0)
-	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+      prev_stmt_info = new_stmt_info;
     }
 
   vec_oprnds0.release ();
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:37.253248202 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:40.721217407 +0100
@@ -5861,7 +5861,7 @@ vectorize_fold_left_reduction (gimple *s
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree vectype_out = STMT_VINFO_VECTYPE (stmt_info);
-  gimple *new_stmt = NULL;
+  stmt_vec_info new_stmt_info = NULL;
 
   int ncopies;
   if (slp_node)
@@ -5917,6 +5917,7 @@ vectorize_fold_left_reduction (gimple *s
   tree def0;
   FOR_EACH_VEC_ELT (vec_oprnds0, i, def0)
     {
+      gimple *new_stmt;
       tree mask = NULL_TREE;
       if (LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
 	mask = vect_get_loop_mask (gsi, masks, vec_num, vectype_in, i);
@@ -5965,17 +5966,18 @@ vectorize_fold_left_reduction (gimple *s
       if (i == vec_num - 1)
 	{
 	  gimple_set_lhs (new_stmt, scalar_dest);
-	  vect_finish_replace_stmt (scalar_dest_def, new_stmt);
+	  new_stmt_info = vect_finish_replace_stmt (scalar_dest_def, new_stmt);
 	}
       else
-	vect_finish_stmt_generation (scalar_dest_def, new_stmt, gsi);
+	new_stmt_info = vect_finish_stmt_generation (scalar_dest_def,
+						     new_stmt, gsi);
 
       if (slp_node)
-	SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
     }
 
   if (!slp_node)
-    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 
   return true;
 }
@@ -6102,7 +6104,7 @@ vectorizable_reduction (gimple *stmt, gi
   int epilog_copies;
   stmt_vec_info prev_stmt_info, prev_phi_info;
   bool single_defuse_cycle = false;
-  gimple *new_stmt = NULL;
+  stmt_vec_info new_stmt_info = NULL;
   int j;
   tree ops[3];
   enum vect_def_type dts[3];
@@ -7130,19 +7132,19 @@ vectorizable_reduction (gimple *stmt, gi
 	      gcc_assert (reduc_index != -1 || ! single_defuse_cycle);
 
 	      if (single_defuse_cycle && reduc_index == 0)
-		vec_oprnds0[0] = gimple_get_lhs (new_stmt);
+		vec_oprnds0[0] = gimple_get_lhs (new_stmt_info->stmt);
 	      else
 		vec_oprnds0[0]
 		  = vect_get_vec_def_for_stmt_copy (dts[0], vec_oprnds0[0]);
 	      if (single_defuse_cycle && reduc_index == 1)
-		vec_oprnds1[0] = gimple_get_lhs (new_stmt);
+		vec_oprnds1[0] = gimple_get_lhs (new_stmt_info->stmt);
 	      else
 		vec_oprnds1[0]
 		  = vect_get_vec_def_for_stmt_copy (dts[1], vec_oprnds1[0]);
 	      if (op_type == ternary_op)
 		{
 		  if (single_defuse_cycle && reduc_index == 2)
-		    vec_oprnds2[0] = gimple_get_lhs (new_stmt);
+		    vec_oprnds2[0] = gimple_get_lhs (new_stmt_info->stmt);
 		  else
 		    vec_oprnds2[0] 
 		      = vect_get_vec_def_for_stmt_copy (dts[2], vec_oprnds2[0]);
@@ -7169,23 +7171,24 @@ vectorizable_reduction (gimple *stmt, gi
 	      new_temp = make_ssa_name (vec_dest, call);
 	      gimple_call_set_lhs (call, new_temp);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt = call;
+	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
 	    }
 	  else
 	    {
 	      if (op_type == ternary_op)
 		vop[2] = vec_oprnds2[i];
 
-	      new_stmt = gimple_build_assign (vec_dest, code,
-					      vop[0], vop[1], vop[2]);
+	      gassign *new_stmt = gimple_build_assign (vec_dest, code,
+						       vop[0], vop[1], vop[2]);
 	      new_temp = make_ssa_name (vec_dest, new_stmt);
 	      gimple_assign_set_lhs (new_stmt, new_temp);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
 	    }
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
 
           if (slp_node)
             {
-              SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	      SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
               vect_defs.quick_push (new_temp);
             }
           else
@@ -7196,11 +7199,11 @@ vectorizable_reduction (gimple *stmt, gi
         continue;
 
       if (j == 0)
-	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
+	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
-	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
+	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
 
-      prev_stmt_info = vinfo_for_stmt (new_stmt);
+      prev_stmt_info = new_stmt_info;
     }
 
   /* Finalize the reduction-phi (set its arguments) and create the
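
(For reference, the idiom the hunks above converge on, distilled into a
hypothetical fragment rather than quoted from any single site; vec_dest,
code, vop0 and vop1 stand for whatever the caller has already built:

  gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
  new_temp = make_ssa_name (vec_dest, new_stmt);
  gimple_assign_set_lhs (new_stmt, new_temp);
  stmt_vec_info new_stmt_info
    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
  if (slp_node)
    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
  else
    prev_stmt_info = new_stmt_info;

The gimple statement keeps its narrowest type (gassign, gcall) for as
long as possible, and the stmt_vec_info returned by
vect_finish_stmt_generation is what gets recorded.)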

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (12 preceding siblings ...)
  2018-07-24  9:58 ` [12/46] Make vect_finish_stmt_generation return " Richard Sandiford
@ 2018-07-24  9:58 ` Richard Sandiford
  2018-07-25  9:21   ` Richard Biener
  2018-08-02  0:22   ` H.J. Lu
  2018-07-24  9:59 ` [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info> Richard Sandiford
                   ` (31 subsequent siblings)
  45 siblings, 2 replies; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:58 UTC (permalink / raw)
  To: gcc-patches

This patch changes STMT_VINFO_VEC_STMT from a gimple stmt to a
stmt_vec_info and makes the vectorizable_* routines pass back
a stmt_vec_info to vect_transform_stmt.
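
As a minimal illustrative sketch (not code from the patch itself), a
consumer of the field now gets a stmt_vec_info back and reaches the
underlying statement through its ->stmt field:

  stmt_vec_info vec_stmt_info = STMT_VINFO_VEC_STMT (stmt_info);
  gimple *vec_stmt = vec_stmt_info->stmt;
  tree lhs = gimple_get_lhs (vec_stmt);

where stmt_info stands for any vectorized statement's stmt_vec_info.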


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Change from
	a gimple stmt to a stmt_vec_info.
	(vectorizable_condition, vectorizable_live_operation)
	(vectorizable_reduction, vectorizable_induction): Pass back the
	vectorized statement as a stmt_vec_info.
	* tree-vect-data-refs.c (vect_record_grouped_load_vectors): Update
	use of STMT_VINFO_VEC_STMT.
	* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise,
	accumulating the inner phis that feed the STMT_VINFO_VEC_STMT
	as stmt_vec_infos rather than gimple stmts.
	(vectorize_fold_left_reduction): Change vec_stmt from a gimple stmt
	to a stmt_vec_info.
	(vectorizable_live_operation): Likewise.
	(vectorizable_reduction, vectorizable_induction): Likewise,
	updating use of STMT_VINFO_VEC_STMT.
	* tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Update use
	of STMT_VINFO_VEC_STMT.
	(vect_build_gather_load_calls, vectorizable_bswap, vectorizable_call)
	(vectorizable_simd_clone_call, vectorizable_conversion)
	(vectorizable_assignment, vectorizable_shift, vectorizable_operation)
	(vectorizable_store, vectorizable_load, vectorizable_condition)
	(vectorizable_comparison, can_vectorize_live_stmts): Change vec_stmt
	from a gimple stmt to a stmt_vec_info.
	(vect_transform_stmt): Update use of STMT_VINFO_VEC_STMT.  Pass a
	pointer to a stmt_vec_info to the vectorizable_* routines.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:44.297185652 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:47.489157307 +0100
@@ -812,7 +812,7 @@ struct _stmt_vec_info {
   tree vectype;
 
   /* The vectorized version of the stmt.  */
-  gimple *vectorized_stmt;
+  stmt_vec_info vectorized_stmt;
 
 
   /* The following is relevant only for stmts that contain a non-scalar
@@ -1560,7 +1560,7 @@ extern void vect_remove_stores (gimple *
 extern bool vect_analyze_stmt (gimple *, bool *, slp_tree, slp_instance,
 			       stmt_vector_for_cost *);
 extern bool vectorizable_condition (gimple *, gimple_stmt_iterator *,
-				    gimple **, tree, int, slp_tree,
+				    stmt_vec_info *, tree, int, slp_tree,
 				    stmt_vector_for_cost *);
 extern void vect_get_load_cost (stmt_vec_info, int, bool,
 				unsigned int *, unsigned int *,
@@ -1649,13 +1649,13 @@ extern tree vect_get_loop_mask (gimple_s
 extern struct loop *vect_transform_loop (loop_vec_info);
 extern loop_vec_info vect_analyze_loop_form (struct loop *, vec_info_shared *);
 extern bool vectorizable_live_operation (gimple *, gimple_stmt_iterator *,
-					 slp_tree, int, gimple **,
+					 slp_tree, int, stmt_vec_info *,
 					 stmt_vector_for_cost *);
 extern bool vectorizable_reduction (gimple *, gimple_stmt_iterator *,
-				    gimple **, slp_tree, slp_instance,
+				    stmt_vec_info *, slp_tree, slp_instance,
 				    stmt_vector_for_cost *);
 extern bool vectorizable_induction (gimple *, gimple_stmt_iterator *,
-				    gimple **, slp_tree,
+				    stmt_vec_info *, slp_tree,
 				    stmt_vector_for_cost *);
 extern tree get_initial_def_for_reduction (gimple *, tree, tree *);
 extern bool vect_worthwhile_without_simd_p (vec_info *, tree_code);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:22:44.285185759 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:22:47.485157343 +0100
@@ -6401,18 +6401,17 @@ vect_record_grouped_load_vectors (gimple
             {
               if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
                 {
-		  gimple *prev_stmt =
-		    STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
+		  stmt_vec_info prev_stmt_info
+		    = STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
 		  stmt_vec_info rel_stmt_info
-		    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
+		    = STMT_VINFO_RELATED_STMT (prev_stmt_info);
 		  while (rel_stmt_info)
 		    {
-		      prev_stmt = rel_stmt_info;
+		      prev_stmt_info = rel_stmt_info;
 		      rel_stmt_info = STMT_VINFO_RELATED_STMT (rel_stmt_info);
 		    }
 
-		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt))
-		    = new_stmt_info;
+		  STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
                 }
             }
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:44.289185723 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:47.489157307 +0100
@@ -4445,7 +4445,7 @@ vect_create_epilog_for_reduction (vec<tr
   gimple *use_stmt, *reduction_phi = NULL;
   bool nested_in_vect_loop = false;
   auto_vec<gimple *> new_phis;
-  auto_vec<gimple *> inner_phis;
+  auto_vec<stmt_vec_info> inner_phis;
   enum vect_def_type dt = vect_unknown_def_type;
   int j, i;
   auto_vec<tree> scalar_results;
@@ -4455,7 +4455,7 @@ vect_create_epilog_for_reduction (vec<tr
   bool slp_reduc = false;
   bool direct_slp_reduc;
   tree new_phi_result;
-  gimple *inner_phi = NULL;
+  stmt_vec_info inner_phi = NULL;
   tree induction_index = NULL_TREE;
 
   if (slp_node)
@@ -4605,7 +4605,7 @@ vect_create_epilog_for_reduction (vec<tr
       tree indx_before_incr, indx_after_incr;
       poly_uint64 nunits_out = TYPE_VECTOR_SUBPARTS (vectype);
 
-      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
+      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info)->stmt;
       gcc_assert (gimple_assign_rhs_code (vec_stmt) == VEC_COND_EXPR);
 
       int scalar_precision
@@ -4738,20 +4738,21 @@ vect_create_epilog_for_reduction (vec<tr
       inner_phis.create (vect_defs.length ());
       FOR_EACH_VEC_ELT (new_phis, i, phi)
 	{
+	  stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
 	  tree new_result = copy_ssa_name (PHI_RESULT (phi));
 	  gphi *outer_phi = create_phi_node (new_result, exit_bb);
 	  SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
 			   PHI_RESULT (phi));
 	  prev_phi_info = loop_vinfo->add_stmt (outer_phi);
-	  inner_phis.quick_push (phi);
+	  inner_phis.quick_push (phi_info);
 	  new_phis[i] = outer_phi;
-          while (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi)))
+	  while (STMT_VINFO_RELATED_STMT (phi_info))
             {
-	      phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
-	      new_result = copy_ssa_name (PHI_RESULT (phi));
+	      phi_info = STMT_VINFO_RELATED_STMT (phi_info);
+	      new_result = copy_ssa_name (PHI_RESULT (phi_info->stmt));
 	      outer_phi = create_phi_node (new_result, exit_bb);
 	      SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
-			       PHI_RESULT (phi));
+			       PHI_RESULT (phi_info->stmt));
 	      stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
 	      STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi_info;
 	      prev_phi_info = outer_phi_info;
@@ -5644,7 +5645,8 @@ vect_create_epilog_for_reduction (vec<tr
 	      if (double_reduc)
 		STMT_VINFO_VEC_STMT (exit_phi_vinfo) = inner_phi;
 	      else
-		STMT_VINFO_VEC_STMT (exit_phi_vinfo) = epilog_stmt;
+		STMT_VINFO_VEC_STMT (exit_phi_vinfo)
+		  = vinfo_for_stmt (epilog_stmt);
               if (!double_reduc
                   || STMT_VINFO_DEF_TYPE (exit_phi_vinfo)
                       != vect_double_reduction_def)
@@ -5706,8 +5708,8 @@ vect_create_epilog_for_reduction (vec<tr
                   add_phi_arg (vect_phi, vect_phi_init,
                                loop_preheader_edge (outer_loop),
                                UNKNOWN_LOCATION);
-                  add_phi_arg (vect_phi, PHI_RESULT (inner_phi),
-                               loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
+		  add_phi_arg (vect_phi, PHI_RESULT (inner_phi->stmt),
+			       loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
                   if (dump_enabled_p ())
                     {
                       dump_printf_loc (MSG_NOTE, vect_location,
@@ -5846,7 +5848,7 @@ vect_expand_fold_left (gimple_stmt_itera
 
 static bool
 vectorize_fold_left_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
-			       gimple **vec_stmt, slp_tree slp_node,
+			       stmt_vec_info *vec_stmt, slp_tree slp_node,
 			       gimple *reduc_def_stmt,
 			       tree_code code, internal_fn reduc_fn,
 			       tree ops[3], tree vectype_in,
@@ -6070,7 +6072,7 @@ is_nonwrapping_integer_induction (gimple
 
 bool
 vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
-			gimple **vec_stmt, slp_tree slp_node,
+			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			slp_instance slp_node_instance,
 			stmt_vector_for_cost *cost_vec)
 {
@@ -6220,7 +6222,8 @@ vectorizable_reduction (gimple *stmt, gi
 		  else
 		    {
 		      if (j == 0)
-			STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
+			STMT_VINFO_VEC_STMT (stmt_info)
+			  = *vec_stmt = new_phi_info;
 		      else
 			STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi_info;
 		      prev_phi_info = new_phi_info;
@@ -7201,7 +7204,7 @@ vectorizable_reduction (gimple *stmt, gi
   /* Finalize the reduction-phi (set its arguments) and create the
      epilog reduction code.  */
   if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
-    vect_defs[0] = gimple_get_lhs (*vec_stmt);
+    vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
 
   vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_stmt,
 				    epilog_copies, reduc_fn, phis,
@@ -7262,7 +7265,7 @@ vect_worthwhile_without_simd_p (vec_info
 bool
 vectorizable_induction (gimple *phi,
 			gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
-			gimple **vec_stmt, slp_tree slp_node,
+			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (phi);
@@ -7700,7 +7703,7 @@ vectorizable_induction (gimple *phi,
   add_phi_arg (induction_phi, vec_def, loop_latch_edge (iv_loop),
 	       UNKNOWN_LOCATION);
 
-  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi;
+  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi_info;
 
   /* In case that vectorization factor (VF) is bigger than the number
      of elements that we can fit in a vectype (nunits), we have to generate
@@ -7779,7 +7782,7 @@ vectorizable_induction (gimple *phi,
 	  gcc_assert (STMT_VINFO_RELEVANT_P (stmt_vinfo)
 		      && !STMT_VINFO_LIVE_P (stmt_vinfo));
 
-	  STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt;
+	  STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt_info;
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
@@ -7811,7 +7814,7 @@ vectorizable_induction (gimple *phi,
 vectorizable_live_operation (gimple *stmt,
 			     gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
 			     slp_tree slp_node, int slp_index,
-			     gimple **vec_stmt,
+			     stmt_vec_info *vec_stmt,
 			     stmt_vector_for_cost *)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:44.293185688 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:22:47.489157307 +0100
@@ -1465,7 +1465,7 @@ vect_init_vector (gimple *stmt, tree val
 vect_get_vec_def_for_operand_1 (gimple *def_stmt, enum vect_def_type dt)
 {
   tree vec_oprnd;
-  gimple *vec_stmt;
+  stmt_vec_info vec_stmt_info;
   stmt_vec_info def_stmt_info = NULL;
 
   switch (dt)
@@ -1482,21 +1482,19 @@ vect_get_vec_def_for_operand_1 (gimple *
         /* Get the def from the vectorized stmt.  */
         def_stmt_info = vinfo_for_stmt (def_stmt);
 
-        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
-        /* Get vectorized pattern statement.  */
-        if (!vec_stmt
-            && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
-            && !STMT_VINFO_RELEVANT (def_stmt_info))
-	  vec_stmt = (STMT_VINFO_VEC_STMT
-		      (STMT_VINFO_RELATED_STMT (def_stmt_info)));
-        gcc_assert (vec_stmt);
-	if (gimple_code (vec_stmt) == GIMPLE_PHI)
-	  vec_oprnd = PHI_RESULT (vec_stmt);
-	else if (is_gimple_call (vec_stmt))
-	  vec_oprnd = gimple_call_lhs (vec_stmt);
+	vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
+	/* Get vectorized pattern statement.  */
+	if (!vec_stmt_info
+	    && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
+	    && !STMT_VINFO_RELEVANT (def_stmt_info))
+	  vec_stmt_info = (STMT_VINFO_VEC_STMT
+			   (STMT_VINFO_RELATED_STMT (def_stmt_info)));
+	gcc_assert (vec_stmt_info);
+	if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
+	  vec_oprnd = PHI_RESULT (phi);
 	else
-	  vec_oprnd = gimple_assign_lhs (vec_stmt);
-        return vec_oprnd;
+	  vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
+	return vec_oprnd;
       }
 
     /* operand is defined by a loop header phi.  */
@@ -1507,14 +1505,14 @@ vect_get_vec_def_for_operand_1 (gimple *
       {
 	gcc_assert (gimple_code (def_stmt) == GIMPLE_PHI);
 
-        /* Get the def from the vectorized stmt.  */
-        def_stmt_info = vinfo_for_stmt (def_stmt);
-        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
-	if (gimple_code (vec_stmt) == GIMPLE_PHI)
-	  vec_oprnd = PHI_RESULT (vec_stmt);
+	/* Get the def from the vectorized stmt.  */
+	def_stmt_info = vinfo_for_stmt (def_stmt);
+	vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
+	if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
+	  vec_oprnd = PHI_RESULT (phi);
 	else
-	  vec_oprnd = gimple_get_lhs (vec_stmt);
-        return vec_oprnd;
+	  vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
+	return vec_oprnd;
       }
 
     default:
@@ -2674,8 +2672,9 @@ vect_build_zero_merge_argument (gimple *
 
 static void
 vect_build_gather_load_calls (gimple *stmt, gimple_stmt_iterator *gsi,
-			      gimple **vec_stmt, gather_scatter_info *gs_info,
-			      tree mask, vect_def_type mask_dt)
+			      stmt_vec_info *vec_stmt,
+			      gather_scatter_info *gs_info, tree mask,
+			      vect_def_type mask_dt)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
@@ -2960,7 +2959,7 @@ vect_get_data_ptr_increment (data_refere
 
 static bool
 vectorizable_bswap (gimple *stmt, gimple_stmt_iterator *gsi,
-		    gimple **vec_stmt, slp_tree slp_node,
+		    stmt_vec_info *vec_stmt, slp_tree slp_node,
 		    tree vectype_in, enum vect_def_type *dt,
 		    stmt_vector_for_cost *cost_vec)
 {
@@ -3104,8 +3103,9 @@ simple_integer_narrowing (tree vectype_o
    Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
 
 static bool
-vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi, gimple **vec_stmt,
-		   slp_tree slp_node, stmt_vector_for_cost *cost_vec)
+vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi,
+		   stmt_vec_info *vec_stmt, slp_tree slp_node,
+		   stmt_vector_for_cost *cost_vec)
 {
   gcall *stmt;
   tree vec_dest;
@@ -3745,7 +3745,7 @@ simd_clone_subparts (tree vectype)
 
 static bool
 vectorizable_simd_clone_call (gimple *stmt, gimple_stmt_iterator *gsi,
-			      gimple **vec_stmt, slp_tree slp_node,
+			      stmt_vec_info *vec_stmt, slp_tree slp_node,
 			      stmt_vector_for_cost *)
 {
   tree vec_dest;
@@ -4596,7 +4596,7 @@ vect_create_vectorized_promotion_stmts (
 
 static bool
 vectorizable_conversion (gimple *stmt, gimple_stmt_iterator *gsi,
-			 gimple **vec_stmt, slp_tree slp_node,
+			 stmt_vec_info *vec_stmt, slp_tree slp_node,
 			 stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
@@ -5204,7 +5204,7 @@ vectorizable_conversion (gimple *stmt, g
 
 static bool
 vectorizable_assignment (gimple *stmt, gimple_stmt_iterator *gsi,
-			 gimple **vec_stmt, slp_tree slp_node,
+			 stmt_vec_info *vec_stmt, slp_tree slp_node,
 			 stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
@@ -5405,7 +5405,7 @@ vect_supportable_shift (enum tree_code c
 
 static bool
 vectorizable_shift (gimple *stmt, gimple_stmt_iterator *gsi,
-                    gimple **vec_stmt, slp_tree slp_node,
+		    stmt_vec_info *vec_stmt, slp_tree slp_node,
 		    stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
@@ -5769,7 +5769,7 @@ vectorizable_shift (gimple *stmt, gimple
 
 static bool
 vectorizable_operation (gimple *stmt, gimple_stmt_iterator *gsi,
-			gimple **vec_stmt, slp_tree slp_node,
+			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
@@ -6222,8 +6222,9 @@ get_group_alias_ptr_type (gimple *first_
    Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
 
 static bool
-vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
-                    slp_tree slp_node, stmt_vector_for_cost *cost_vec)
+vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi,
+		    stmt_vec_info *vec_stmt, slp_tree slp_node,
+		    stmt_vector_for_cost *cost_vec)
 {
   tree data_ref;
   tree op;
@@ -7385,8 +7386,9 @@ hoist_defs_of_uses (gimple *stmt, struct
    Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
 
 static bool
-vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
-                   slp_tree slp_node, slp_instance slp_node_instance,
+vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi,
+		   stmt_vec_info *vec_stmt, slp_tree slp_node,
+		   slp_instance slp_node_instance,
 		   stmt_vector_for_cost *cost_vec)
 {
   tree scalar_dest;
@@ -8710,8 +8712,9 @@ vect_is_simple_cond (tree cond, vec_info
 
 bool
 vectorizable_condition (gimple *stmt, gimple_stmt_iterator *gsi,
-			gimple **vec_stmt, tree reduc_def, int reduc_index,
-			slp_tree slp_node, stmt_vector_for_cost *cost_vec)
+			stmt_vec_info *vec_stmt, tree reduc_def,
+			int reduc_index, slp_tree slp_node,
+			stmt_vector_for_cost *cost_vec)
 {
   tree scalar_dest = NULL_TREE;
   tree vec_dest = NULL_TREE;
@@ -9111,7 +9114,7 @@ vectorizable_condition (gimple *stmt, gi
 
 static bool
 vectorizable_comparison (gimple *stmt, gimple_stmt_iterator *gsi,
-			 gimple **vec_stmt, tree reduc_def,
+			 stmt_vec_info *vec_stmt, tree reduc_def,
 			 slp_tree slp_node, stmt_vector_for_cost *cost_vec)
 {
   tree lhs, rhs1, rhs2;
@@ -9383,7 +9386,7 @@ vectorizable_comparison (gimple *stmt, g
 
 static bool
 can_vectorize_live_stmts (gimple *stmt, gimple_stmt_iterator *gsi,
-			  slp_tree slp_node, gimple **vec_stmt,
+			  slp_tree slp_node, stmt_vec_info *vec_stmt,
 			  stmt_vector_for_cost *cost_vec)
 {
   if (slp_node)
@@ -9647,11 +9650,11 @@ vect_transform_stmt (gimple *stmt, gimpl
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   bool is_store = false;
-  gimple *vec_stmt = NULL;
+  stmt_vec_info vec_stmt = NULL;
   bool done;
 
   gcc_assert (slp_node || !PURE_SLP_STMT (stmt_info));
-  gimple *old_vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
+  stmt_vec_info old_vec_stmt_info = STMT_VINFO_VEC_STMT (stmt_info);
 
   bool nested_p = (STMT_VINFO_LOOP_VINFO (stmt_info)
 		   && nested_in_vect_loop_p
@@ -9752,7 +9755,7 @@ vect_transform_stmt (gimple *stmt, gimpl
      This would break hybrid SLP vectorization.  */
   if (slp_node)
     gcc_assert (!vec_stmt
-		&& STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt);
+		&& STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt_info);
 
   /* Handle inner-loop stmts whose DEF is used in the loop-nest that
      is being vectorized, but outside the immediately enclosing loop.  */

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (14 preceding siblings ...)
  2018-07-24  9:59 ` [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info> Richard Sandiford
@ 2018-07-24  9:59 ` Richard Sandiford
  2018-07-25  9:22   ` Richard Biener
  2018-07-24  9:59 ` [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info> Richard Sandiford
                   ` (29 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:59 UTC (permalink / raw)
  To: gcc-patches

This patch changes STMT_VINFO_REDUC_DEF from a gimple stmt to a
stmt_vec_info.
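
A minimal sketch of the resulting invariant, with illustrative names:
the reduction PHI and its def now point at each other directly as
stmt_vec_infos, so callers such as vect_active_double_reduction_p can
test the linked statement without a vinfo_for_stmt lookup:

  /* phi_info and def_info are the stmt_vec_infos of the PHI and of
     the def returned by vect_force_simple_reduction.  */
  STMT_VINFO_REDUC_DEF (phi_info) = def_info;
  STMT_VINFO_REDUC_DEF (def_info) = phi_info;
  ...
  return STMT_VINFO_RELEVANT_P (STMT_VINFO_REDUC_DEF (stmt_info));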


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::reduc_def): Change from
	a gimple stmt to a stmt_vec_info.
	* tree-vect-loop.c (vect_active_double_reduction_p)
	(vect_force_simple_reduction, vectorizable_reduction): Update
	accordingly.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:50.777128110 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:53.909100298 +0100
@@ -921,7 +921,7 @@ struct _stmt_vec_info {
   /* On a reduction PHI the def returned by vect_force_simple_reduction.
      On the def returned by vect_force_simple_reduction the
      corresponding PHI.  */
-  gimple *reduc_def;
+  stmt_vec_info reduc_def;
 
   /* The number of scalar stmt references from active SLP instances.  */
   unsigned int num_slp_uses;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:50.777128110 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:53.909100298 +0100
@@ -1499,8 +1499,7 @@ vect_active_double_reduction_p (stmt_vec
   if (STMT_VINFO_DEF_TYPE (stmt_info) != vect_double_reduction_def)
     return false;
 
-  gimple *other_phi = STMT_VINFO_REDUC_DEF (stmt_info);
-  return STMT_VINFO_RELEVANT_P (vinfo_for_stmt (other_phi));
+  return STMT_VINFO_RELEVANT_P (STMT_VINFO_REDUC_DEF (stmt_info));
 }
 
 /* Function vect_analyze_loop_operations.
@@ -3293,12 +3292,12 @@ vect_force_simple_reduction (loop_vec_in
 					  &v_reduc_type);
   if (def)
     {
-      stmt_vec_info reduc_def_info = vinfo_for_stmt (phi);
-      STMT_VINFO_REDUC_TYPE (reduc_def_info) = v_reduc_type;
-      STMT_VINFO_REDUC_DEF (reduc_def_info) = def;
-      reduc_def_info = vinfo_for_stmt (def);
-      STMT_VINFO_REDUC_TYPE (reduc_def_info) = v_reduc_type;
-      STMT_VINFO_REDUC_DEF (reduc_def_info) = phi;
+      stmt_vec_info phi_info = vinfo_for_stmt (phi);
+      stmt_vec_info def_info = vinfo_for_stmt (def);
+      STMT_VINFO_REDUC_TYPE (phi_info) = v_reduc_type;
+      STMT_VINFO_REDUC_DEF (phi_info) = def_info;
+      STMT_VINFO_REDUC_TYPE (def_info) = v_reduc_type;
+      STMT_VINFO_REDUC_DEF (def_info) = phi_info;
     }
   return def;
 }
@@ -6153,17 +6152,16 @@ vectorizable_reduction (gimple *stmt, gi
 	   for reductions involving a single statement.  */
 	return true;
 
-      gimple *reduc_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
-      if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (reduc_stmt)))
-	reduc_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (reduc_stmt));
+      stmt_vec_info reduc_stmt_info = STMT_VINFO_REDUC_DEF (stmt_info);
+      if (STMT_VINFO_IN_PATTERN_P (reduc_stmt_info))
+	reduc_stmt_info = STMT_VINFO_RELATED_STMT (reduc_stmt_info);
 
-      stmt_vec_info reduc_stmt_info = vinfo_for_stmt (reduc_stmt);
       if (STMT_VINFO_VEC_REDUCTION_TYPE (reduc_stmt_info)
 	  == EXTRACT_LAST_REDUCTION)
 	/* Leave the scalar phi in place.  */
 	return true;
 
-      gcc_assert (is_gimple_assign (reduc_stmt));
+      gassign *reduc_stmt = as_a <gassign *> (reduc_stmt_info->stmt);
       for (unsigned k = 1; k < gimple_num_ops (reduc_stmt); ++k)
 	{
 	  tree op = gimple_op (reduc_stmt, k);
@@ -6314,7 +6312,7 @@ vectorizable_reduction (gimple *stmt, gi
      The last use is the reduction variable.  In case of nested cycle this
      assumption is not true: we use reduc_index to record the index of the
      reduction variable.  */
-  gimple *reduc_def_stmt = NULL;
+  stmt_vec_info reduc_def_info = NULL;
   int reduc_index = -1;
   for (i = 0; i < op_type; i++)
     {
@@ -6329,7 +6327,7 @@ vectorizable_reduction (gimple *stmt, gi
       gcc_assert (is_simple_use);
       if (dt == vect_reduction_def)
 	{
-	  reduc_def_stmt = def_stmt_info;
+	  reduc_def_info = def_stmt_info;
 	  reduc_index = i;
 	  continue;
 	}
@@ -6353,7 +6351,7 @@ vectorizable_reduction (gimple *stmt, gi
       if (dt == vect_nested_cycle)
 	{
 	  found_nested_cycle_def = true;
-	  reduc_def_stmt = def_stmt_info;
+	  reduc_def_info = def_stmt_info;
 	  reduc_index = i;
 	}
 
@@ -6391,12 +6389,16 @@ vectorizable_reduction (gimple *stmt, gi
 	}
 
       if (orig_stmt_info)
-	reduc_def_stmt = STMT_VINFO_REDUC_DEF (orig_stmt_info);
+	reduc_def_info = STMT_VINFO_REDUC_DEF (orig_stmt_info);
       else
-	reduc_def_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
+	reduc_def_info = STMT_VINFO_REDUC_DEF (stmt_info);
     }
 
-  if (! reduc_def_stmt || gimple_code (reduc_def_stmt) != GIMPLE_PHI)
+  if (! reduc_def_info)
+    return false;
+
+  gphi *reduc_def_phi = dyn_cast <gphi *> (reduc_def_info->stmt);
+  if (!reduc_def_phi)
     return false;
 
   if (!(reduc_index == -1
@@ -6415,12 +6417,11 @@ vectorizable_reduction (gimple *stmt, gi
       return false;
     }
 
-  stmt_vec_info reduc_def_info = vinfo_for_stmt (reduc_def_stmt);
   /* PHIs should not participate in patterns.  */
   gcc_assert (!STMT_VINFO_RELATED_STMT (reduc_def_info));
   enum vect_reduction_type v_reduc_type
     = STMT_VINFO_REDUC_TYPE (reduc_def_info);
-  gimple *tmp = STMT_VINFO_REDUC_DEF (reduc_def_info);
+  stmt_vec_info tmp = STMT_VINFO_REDUC_DEF (reduc_def_info);
 
   STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info) = v_reduc_type;
   /* If we have a condition reduction, see if we can simplify it further.  */
@@ -6547,15 +6548,14 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (orig_stmt_info)
     gcc_assert (tmp == orig_stmt_info
-		|| (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp))
-		    == orig_stmt_info));
+		|| REDUC_GROUP_FIRST_ELEMENT (tmp) == orig_stmt_info);
   else
     /* We changed STMT to be the first stmt in reduction chain, hence we
        check that in this case the first element in the chain is STMT.  */
-    gcc_assert (stmt == tmp
-		|| REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp)) == stmt);
+    gcc_assert (tmp == stmt_info
+		|| REDUC_GROUP_FIRST_ELEMENT (tmp) == stmt_info);
 
-  if (STMT_VINFO_LIVE_P (vinfo_for_stmt (reduc_def_stmt)))
+  if (STMT_VINFO_LIVE_P (reduc_def_info))
     return false;
 
   if (slp_node)
@@ -6702,9 +6702,9 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (nested_cycle)
     {
-      def_bb = gimple_bb (reduc_def_stmt);
+      def_bb = gimple_bb (reduc_def_phi);
       def_stmt_loop = def_bb->loop_father;
-      def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_stmt,
+      def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_phi,
                                        loop_preheader_edge (def_stmt_loop));
       stmt_vec_info def_arg_stmt_info = loop_vinfo->lookup_def (def_arg);
       if (def_arg_stmt_info
@@ -6954,7 +6954,7 @@ vectorizable_reduction (gimple *stmt, gi
    in vectorizable_reduction and there are no intermediate stmts
    participating.  */
   stmt_vec_info use_stmt_info;
-  tree reduc_phi_result = gimple_phi_result (reduc_def_stmt);
+  tree reduc_phi_result = gimple_phi_result (reduc_def_phi);
   if (ncopies > 1
       && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
       && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
@@ -7039,7 +7039,7 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (reduction_type == FOLD_LEFT_REDUCTION)
     return vectorize_fold_left_reduction
-      (stmt, gsi, vec_stmt, slp_node, reduc_def_stmt, code,
+      (stmt, gsi, vec_stmt, slp_node, reduc_def_phi, code,
        reduc_fn, ops, vectype_in, reduc_index, masks);
 
   if (reduction_type == EXTRACT_LAST_REDUCTION)
@@ -7070,7 +7070,7 @@ vectorizable_reduction (gimple *stmt, gi
   if (slp_node)
     phis.splice (SLP_TREE_VEC_STMTS (slp_node_instance->reduc_phis));
   else
-    phis.quick_push (STMT_VINFO_VEC_STMT (vinfo_for_stmt (reduc_def_stmt)));
+    phis.quick_push (STMT_VINFO_VEC_STMT (reduc_def_info));
 
   for (j = 0; j < ncopies; j++)
     {
@@ -7208,7 +7208,7 @@ vectorizable_reduction (gimple *stmt, gi
   if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
     vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
 
-  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_stmt,
+  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_phi,
 				    epilog_copies, reduc_fn, phis,
 				    double_reduc, slp_node, slp_node_instance,
 				    cond_reduc_val, cond_reduc_op_code,

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info>
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (15 preceding siblings ...)
  2018-07-24  9:59 ` [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info Richard Sandiford
@ 2018-07-24  9:59 ` Richard Sandiford
  2018-07-25  9:22   ` Richard Biener
  2018-07-24 10:00 ` [18/46] Make SLP_TREE_SCALAR_STMTS " Richard Sandiford
                   ` (28 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:59 UTC (permalink / raw)
  To: gcc-patches

This patch changes SLP_TREE_VEC_STMTS from a vec<gimple *> to a
vec<stmt_vec_info>.  This involved making the same change to the
phis vector in vectorizable_reduction, since SLP_TREE_VEC_STMTS is
spliced into it here:

  phis.splice (SLP_TREE_VEC_STMTS (slp_node_instance->reduc_phis));
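
For reference, what the change means at use sites -- a minimal
before/after sketch (illustrative only, distilled from the hunks below):

  /* Before: the vector held the vectorized gimple stmts directly.  */
  gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry];

  /* After: it holds stmt_vec_infos, so callers that want the
     underlying gimple stmt go through the stmt field.  */
  gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry]->stmt;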


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_slp_tree::vec_stmts): Change from a
	vec<gimple *> to a vec<stmt_vec_info>.
	* tree-vect-loop.c (vect_create_epilog_for_reduction): Change
	the reduction_phis argument from a vec<gimple *> to a
	vec<stmt_vec_info>.
	(vectorizable_reduction): Likewise the phis local variable that
	is passed to vect_create_epilog_for_reduction.  Update for new type
	of SLP_TREE_VEC_STMTS.
	(vectorizable_induction): Update for new type of SLP_TREE_VEC_STMTS.
	(vectorizable_live_operation): Likewise.
	* tree-vect-slp.c (vect_get_slp_vect_defs): Likewise.
	(vect_transform_slp_perm_load, vect_schedule_slp_instance): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:47.489157307 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:50.777128110 +0100
@@ -143,7 +143,7 @@ struct _slp_tree {
      permutation.  */
   vec<unsigned> load_permutation;
   /* Vectorized stmt/s.  */
-  vec<gimple *> vec_stmts;
+  vec<stmt_vec_info> vec_stmts;
   /* Number of vector stmts that are created to replace the group of scalar
      stmts. It is calculated during the transformation phase as the number of
      scalar elements in one scalar iteration (GROUP_SIZE) multiplied by VF
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:47.489157307 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:50.777128110 +0100
@@ -4412,7 +4412,7 @@ get_initial_defs_for_reduction (slp_tree
 vect_create_epilog_for_reduction (vec<tree> vect_defs, gimple *stmt,
 				  gimple *reduc_def_stmt,
 				  int ncopies, internal_fn reduc_fn,
-				  vec<gimple *> reduction_phis,
+				  vec<stmt_vec_info> reduction_phis,
                                   bool double_reduc, 
 				  slp_tree slp_node,
 				  slp_instance slp_node_instance,
@@ -4429,6 +4429,7 @@ vect_create_epilog_for_reduction (vec<tr
   tree scalar_dest;
   tree scalar_type;
   gimple *new_phi = NULL, *phi;
+  stmt_vec_info phi_info;
   gimple_stmt_iterator exit_gsi;
   tree vec_dest;
   tree new_temp = NULL_TREE, new_dest, new_name, new_scalar_dest;
@@ -4442,7 +4443,8 @@ vect_create_epilog_for_reduction (vec<tr
   tree orig_name, scalar_result;
   imm_use_iterator imm_iter, phi_imm_iter;
   use_operand_p use_p, phi_use_p;
-  gimple *use_stmt, *reduction_phi = NULL;
+  gimple *use_stmt;
+  stmt_vec_info reduction_phi_info = NULL;
   bool nested_in_vect_loop = false;
   auto_vec<gimple *> new_phis;
   auto_vec<stmt_vec_info> inner_phis;
@@ -4540,7 +4542,7 @@ vect_create_epilog_for_reduction (vec<tr
     }
 
   /* Set phi nodes arguments.  */
-  FOR_EACH_VEC_ELT (reduction_phis, i, phi)
+  FOR_EACH_VEC_ELT (reduction_phis, i, phi_info)
     {
       tree vec_init_def = vec_initial_defs[i];
       tree def = vect_defs[i];
@@ -4548,7 +4550,7 @@ vect_create_epilog_for_reduction (vec<tr
         {
 	  if (j != 0)
 	    {
-	      phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
+	      phi_info = STMT_VINFO_RELATED_STMT (phi_info);
 	      if (nested_in_vect_loop)
 		vec_init_def
 		  = vect_get_vec_def_for_stmt_copy (initial_def_dt,
@@ -4557,6 +4559,7 @@ vect_create_epilog_for_reduction (vec<tr
 
 	  /* Set the loop-entry arg of the reduction-phi.  */
 
+	  gphi *phi = as_a <gphi *> (phi_info->stmt);
 	  if (STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info)
 	      == INTEGER_INDUC_COND_REDUCTION)
 	    {
@@ -4569,19 +4572,18 @@ vect_create_epilog_for_reduction (vec<tr
 	      tree induc_val_vec
 		= build_vector_from_val (vec_init_def_type, induc_val);
 
-	      add_phi_arg (as_a <gphi *> (phi), induc_val_vec,
-			   loop_preheader_edge (loop), UNKNOWN_LOCATION);
+	      add_phi_arg (phi, induc_val_vec, loop_preheader_edge (loop),
+			   UNKNOWN_LOCATION);
 	    }
 	  else
-	    add_phi_arg (as_a <gphi *> (phi), vec_init_def,
-			 loop_preheader_edge (loop), UNKNOWN_LOCATION);
+	    add_phi_arg (phi, vec_init_def, loop_preheader_edge (loop),
+			 UNKNOWN_LOCATION);
 
           /* Set the loop-latch arg for the reduction-phi.  */
           if (j > 0)
             def = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type, def);
 
-          add_phi_arg (as_a <gphi *> (phi), def, loop_latch_edge (loop),
-		       UNKNOWN_LOCATION);
+	  add_phi_arg (phi, def, loop_latch_edge (loop), UNKNOWN_LOCATION);
 
           if (dump_enabled_p ())
             {
@@ -5599,7 +5601,7 @@ vect_create_epilog_for_reduction (vec<tr
       if (k % ratio == 0)
         {
           epilog_stmt = new_phis[k / ratio];
-          reduction_phi = reduction_phis[k / ratio];
+	  reduction_phi_info = reduction_phis[k / ratio];
 	  if (double_reduc)
 	    inner_phi = inner_phis[k / ratio];
         }
@@ -5672,7 +5674,6 @@ vect_create_epilog_for_reduction (vec<tr
                   stmt_vec_info use_stmt_vinfo;
                   tree vect_phi_init, preheader_arg, vect_phi_res;
                   basic_block bb = gimple_bb (use_stmt);
-		  gimple *use;
 
                   /* Check that USE_STMT is really double reduction phi
                      node.  */
@@ -5722,13 +5723,14 @@ vect_create_epilog_for_reduction (vec<tr
                   /* Replace the use, i.e., set the correct vs1 in the regular
                      reduction phi node.  FORNOW, NCOPIES is always 1, so the
                      loop is redundant.  */
-                  use = reduction_phi;
-                  for (j = 0; j < ncopies; j++)
-                    {
-                      edge pr_edge = loop_preheader_edge (loop);
-                      SET_PHI_ARG_DEF (use, pr_edge->dest_idx, vect_phi_res);
-                      use = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use));
-                    }
+		  stmt_vec_info use_info = reduction_phi_info;
+		  for (j = 0; j < ncopies; j++)
+		    {
+		      edge pr_edge = loop_preheader_edge (loop);
+		      SET_PHI_ARG_DEF (as_a <gphi *> (use_info->stmt),
+				       pr_edge->dest_idx, vect_phi_res);
+		      use_info = STMT_VINFO_RELATED_STMT (use_info);
+		    }
                 }
             }
         }
@@ -6112,7 +6114,7 @@ vectorizable_reduction (gimple *stmt, gi
   auto_vec<tree> vec_oprnds1;
   auto_vec<tree> vec_oprnds2;
   auto_vec<tree> vect_defs;
-  auto_vec<gimple *> phis;
+  auto_vec<stmt_vec_info> phis;
   int vec_num;
   tree def0, tem;
   tree cr_index_scalar_type = NULL_TREE, cr_index_vector_type = NULL_TREE;
@@ -6218,7 +6220,7 @@ vectorizable_reduction (gimple *stmt, gi
 		  stmt_vec_info new_phi_info = loop_vinfo->add_stmt (new_phi);
 
 		  if (slp_node)
-		    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi);
+		    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi_info);
 		  else
 		    {
 		      if (j == 0)
@@ -7075,9 +7077,9 @@ vectorizable_reduction (gimple *stmt, gi
       if (code == COND_EXPR)
         {
           gcc_assert (!slp_node);
-          vectorizable_condition (stmt, gsi, vec_stmt, 
-                                  PHI_RESULT (phis[0]), 
-                                  reduc_index, NULL, NULL);
+	  vectorizable_condition (stmt, gsi, vec_stmt,
+				  PHI_RESULT (phis[0]->stmt),
+				  reduc_index, NULL, NULL);
           /* Multiple types are not supported for condition.  */
           break;
         }
@@ -7501,7 +7503,8 @@ vectorizable_induction (gimple *phi,
 	  /* Create the induction-phi that defines the induction-operand.  */
 	  vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
 	  induction_phi = create_phi_node (vec_dest, iv_loop->header);
-	  loop_vinfo->add_stmt (induction_phi);
+	  stmt_vec_info induction_phi_info
+	    = loop_vinfo->add_stmt (induction_phi);
 	  induc_def = PHI_RESULT (induction_phi);
 
 	  /* Create the iv update inside the loop  */
@@ -7515,7 +7518,7 @@ vectorizable_induction (gimple *phi,
 	  add_phi_arg (induction_phi, vec_def, loop_latch_edge (iv_loop),
 		       UNKNOWN_LOCATION);
 
-	  SLP_TREE_VEC_STMTS (slp_node).quick_push (induction_phi);
+	  SLP_TREE_VEC_STMTS (slp_node).quick_push (induction_phi_info);
 	}
 
       /* Re-use IVs when we can.  */
@@ -7540,7 +7543,7 @@ vectorizable_induction (gimple *phi,
 	  vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
 	  for (; ivn < nvects; ++ivn)
 	    {
-	      gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs];
+	      gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs]->stmt;
 	      tree def;
 	      if (gimple_code (iv) == GIMPLE_PHI)
 		def = gimple_phi_result (iv);
@@ -7556,8 +7559,8 @@ vectorizable_induction (gimple *phi,
 		  gimple_stmt_iterator tgsi = gsi_for_stmt (iv);
 		  gsi_insert_after (&tgsi, new_stmt, GSI_CONTINUE_LINKING);
 		}
-	      loop_vinfo->add_stmt (new_stmt);
-	      SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
+	      SLP_TREE_VEC_STMTS (slp_node).quick_push
+		(loop_vinfo->add_stmt (new_stmt));
 	    }
 	}
 
@@ -7943,7 +7946,7 @@ vectorizable_live_operation (gimple *stm
       gcc_assert (!LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
 
       /* Get the correct slp vectorized stmt.  */
-      gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry];
+      gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry]->stmt;
       if (gphi *phi = dyn_cast <gphi *> (vec_stmt))
 	vec_lhs = gimple_phi_result (phi);
       else
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:44.293185688 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:50.777128110 +0100
@@ -3557,18 +3557,18 @@ vect_get_constant_vectors (tree op, slp_
 vect_get_slp_vect_defs (slp_tree slp_node, vec<tree> *vec_oprnds)
 {
   tree vec_oprnd;
-  gimple *vec_def_stmt;
+  stmt_vec_info vec_def_stmt_info;
   unsigned int i;
 
   gcc_assert (SLP_TREE_VEC_STMTS (slp_node).exists ());
 
-  FOR_EACH_VEC_ELT (SLP_TREE_VEC_STMTS (slp_node), i, vec_def_stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_VEC_STMTS (slp_node), i, vec_def_stmt_info)
     {
-      gcc_assert (vec_def_stmt);
-      if (gimple_code (vec_def_stmt) == GIMPLE_PHI)
-	vec_oprnd = gimple_phi_result (vec_def_stmt);
+      gcc_assert (vec_def_stmt_info);
+      if (gphi *vec_def_phi = dyn_cast <gphi *> (vec_def_stmt_info->stmt))
+	vec_oprnd = gimple_phi_result (vec_def_phi);
       else
-	vec_oprnd = gimple_get_lhs (vec_def_stmt);
+	vec_oprnd = gimple_get_lhs (vec_def_stmt_info->stmt);
       vec_oprnds->quick_push (vec_oprnd);
     }
 }
@@ -3687,6 +3687,7 @@ vect_transform_slp_perm_load (slp_tree n
 {
   gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  vec_info *vinfo = stmt_info->vinfo;
   tree mask_element_type = NULL_TREE, mask_type;
   int vec_index = 0;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -3827,26 +3828,28 @@ vect_transform_slp_perm_load (slp_tree n
 		  /* Generate the permute statement if necessary.  */
 		  tree first_vec = dr_chain[first_vec_index];
 		  tree second_vec = dr_chain[second_vec_index];
-		  gimple *perm_stmt;
+		  stmt_vec_info perm_stmt_info;
 		  if (! noop_p)
 		    {
 		      tree perm_dest
 			= vect_create_destination_var (gimple_assign_lhs (stmt),
 						       vectype);
 		      perm_dest = make_ssa_name (perm_dest);
-		      perm_stmt = gimple_build_assign (perm_dest,
-						       VEC_PERM_EXPR,
-						       first_vec, second_vec,
-						       mask_vec);
-		      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+		      gassign *perm_stmt
+			= gimple_build_assign (perm_dest, VEC_PERM_EXPR,
+					       first_vec, second_vec,
+					       mask_vec);
+		      perm_stmt_info
+			= vect_finish_stmt_generation (stmt, perm_stmt, gsi);
 		    }
 		  else
 		    /* If mask was NULL_TREE generate the requested
 		       identity transform.  */
-		    perm_stmt = SSA_NAME_DEF_STMT (first_vec);
+		    perm_stmt_info = vinfo->lookup_def (first_vec);
 
 		  /* Store the vector statement in NODE.  */
-		  SLP_TREE_VEC_STMTS (node)[vect_stmts_counter++] = perm_stmt;
+		  SLP_TREE_VEC_STMTS (node)[vect_stmts_counter++]
+		    = perm_stmt_info;
 		}
 
 	      index = 0;
@@ -3948,8 +3951,8 @@ vect_schedule_slp_instance (slp_tree nod
 	  mask.quick_push (0);
       if (ocode != ERROR_MARK)
 	{
-	  vec<gimple *> v0;
-	  vec<gimple *> v1;
+	  vec<stmt_vec_info> v0;
+	  vec<stmt_vec_info> v1;
 	  unsigned j;
 	  tree tmask = NULL_TREE;
 	  vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
@@ -3990,10 +3993,11 @@ vect_schedule_slp_instance (slp_tree nod
 	      gimple *vstmt;
 	      vstmt = gimple_build_assign (make_ssa_name (vectype),
 					   VEC_PERM_EXPR,
-					   gimple_assign_lhs (v0[j]),
-					   gimple_assign_lhs (v1[j]), tmask);
-	      vect_finish_stmt_generation (stmt, vstmt, &si);
-	      SLP_TREE_VEC_STMTS (node).quick_push (vstmt);
+					   gimple_assign_lhs (v0[j]->stmt),
+					   gimple_assign_lhs (v1[j]->stmt),
+					   tmask);
+	      SLP_TREE_VEC_STMTS (node).quick_push
+		(vect_finish_stmt_generation (stmt, vstmt, &si));
 	    }
 	  v0.release ();
 	  v1.release ();

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info>
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (13 preceding siblings ...)
  2018-07-24  9:58 ` [14/46] Make STMT_VINFO_VEC_STMT " Richard Sandiford
@ 2018-07-24  9:59 ` Richard Sandiford
  2018-07-25  9:23   ` Richard Biener
  2018-07-24  9:59 ` [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info Richard Sandiford
                   ` (30 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24  9:59 UTC (permalink / raw)
  To: gcc-patches

This patch changes LOOP_VINFO_REDUCTIONS from an auto_vec<gimple *>
to an auto_vec<stmt_vec_info>.  It also changes the associated
vect_force_simple_reduction so that it takes and returns stmt_vec_infos
instead of gimple stmts.
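
The shape of the change at call sites, taken from the
gather_scalar_reductions hunk below:

  /* Before: pass the phi itself and get a gimple stmt back.  */
  gimple *reduc_stmt
    = vect_force_simple_reduction (simple_loop_info, phi,
				   &double_reduc, true);

  /* After: pass the phi's stmt_vec_info and get a stmt_vec_info back.  */
  stmt_vec_info reduc_stmt_info
    = vect_force_simple_reduction (simple_loop_info,
				   simple_loop_info->lookup_stmt (phi),
				   &double_reduc, true);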


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_loop_vec_info::reductions): Change from an
	auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
	(vect_force_simple_reduction): Take and return stmt_vec_infos rather
	than gimple stmts.
	* tree-parloops.c (valid_reduction_p): Take a stmt_vec_info instead
	of a gimple stmt.
	(gather_scalar_reductions): Update after the above interface changes.

	* tree-vect-loop.c (vect_analyze_scalar_cycles_1): Likewise.
	(vect_is_simple_reduction): Take and return stmt_vec_infos rather
	than gimple stmts.
	(vect_force_simple_reduction): Likewise.
	* tree-vect-patterns.c (vect_pattern_recog_1): Update use of
	LOOP_VINFO_REDUCTIONS.
	* tree-vect-slp.c (vect_analyze_slp_instance): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:53.909100298 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:22:57.277070390 +0100
@@ -475,7 +475,7 @@ typedef struct _loop_vec_info : public v
   auto_vec<gimple *> may_misalign_stmts;
 
   /* Reduction cycles detected in the loop. Used in loop-aware SLP.  */
-  auto_vec<gimple *> reductions;
+  auto_vec<stmt_vec_info> reductions;
 
   /* All reduction chains in the loop, represented by the first
      stmt in the chain.  */
@@ -1627,8 +1627,8 @@ extern tree vect_create_addr_base_for_ve
 
 /* In tree-vect-loop.c.  */
 /* FORNOW: Used in tree-parloops.c.  */
-extern gimple *vect_force_simple_reduction (loop_vec_info, gimple *,
-					    bool *, bool);
+extern stmt_vec_info vect_force_simple_reduction (loop_vec_info, stmt_vec_info,
+						  bool *, bool);
 /* Used in gimple-loop-interchange.c.  */
 extern bool check_reduction_path (dump_user_location_t, loop_p, gphi *, tree,
 				  enum tree_code);
Index: gcc/tree-parloops.c
===================================================================
--- gcc/tree-parloops.c	2018-06-27 10:27:09.778650686 +0100
+++ gcc/tree-parloops.c	2018-07-24 10:22:57.273070426 +0100
@@ -2570,15 +2570,14 @@ set_reduc_phi_uids (reduction_info **slo
   return 1;
 }
 
-/* Return true if the type of reduction performed by STMT is suitable
+/* Return true if the type of reduction performed by STMT_INFO is suitable
    for this pass.  */
 
 static bool
-valid_reduction_p (gimple *stmt)
+valid_reduction_p (stmt_vec_info stmt_info)
 {
   /* Parallelization would reassociate the operation, which isn't
      allowed for in-order reductions.  */
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vect_reduction_type reduc_type = STMT_VINFO_REDUC_TYPE (stmt_info);
   return reduc_type != FOLD_LEFT_REDUCTION;
 }
@@ -2615,10 +2614,11 @@ gather_scalar_reductions (loop_p loop, r
       if (simple_iv (loop, loop, res, &iv, true))
 	continue;
 
-      gimple *reduc_stmt
-	= vect_force_simple_reduction (simple_loop_info, phi,
+      stmt_vec_info reduc_stmt_info
+	= vect_force_simple_reduction (simple_loop_info,
+				       simple_loop_info->lookup_stmt (phi),
 				       &double_reduc, true);
-      if (!reduc_stmt || !valid_reduction_p (reduc_stmt))
+      if (!reduc_stmt_info || !valid_reduction_p (reduc_stmt_info))
 	continue;
 
       if (double_reduc)
@@ -2627,11 +2627,11 @@ gather_scalar_reductions (loop_p loop, r
 	    continue;
 
 	  double_reduc_phis.safe_push (phi);
-	  double_reduc_stmts.safe_push (reduc_stmt);
+	  double_reduc_stmts.safe_push (reduc_stmt_info->stmt);
 	  continue;
 	}
 
-      build_new_reduction (reduction_list, reduc_stmt, phi);
+      build_new_reduction (reduction_list, reduc_stmt_info->stmt, phi);
     }
   delete simple_loop_info;
 
@@ -2661,12 +2661,15 @@ gather_scalar_reductions (loop_p loop, r
 			     &iv, true))
 		continue;
 
-	      gimple *inner_reduc_stmt
-		= vect_force_simple_reduction (simple_loop_info, inner_phi,
+	      stmt_vec_info inner_phi_info
+		= simple_loop_info->lookup_stmt (inner_phi);
+	      stmt_vec_info inner_reduc_stmt_info
+		= vect_force_simple_reduction (simple_loop_info,
+					       inner_phi_info,
 					       &double_reduc, true);
 	      gcc_assert (!double_reduc);
-	      if (inner_reduc_stmt == NULL
-		  || !valid_reduction_p (inner_reduc_stmt))
+	      if (!inner_reduc_stmt_info
+		  || !valid_reduction_p (inner_reduc_stmt_info))
 		continue;
 
 	      build_new_reduction (reduction_list, double_reduc_stmts[i], phi);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:53.909100298 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:22:57.273070426 +0100
@@ -546,7 +546,6 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
       gimple *phi = worklist.pop ();
       tree def = PHI_RESULT (phi);
       stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
-      gimple *reduc_stmt;
 
       if (dump_enabled_p ())
         {
@@ -557,9 +556,10 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
       gcc_assert (!virtual_operand_p (def)
 		  && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_unknown_def_type);
 
-      reduc_stmt = vect_force_simple_reduction (loop_vinfo, phi,
-						&double_reduc, false);
-      if (reduc_stmt)
+      stmt_vec_info reduc_stmt_info
+	= vect_force_simple_reduction (loop_vinfo, stmt_vinfo,
+				       &double_reduc, false);
+      if (reduc_stmt_info)
         {
           if (double_reduc)
             {
@@ -568,8 +568,8 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
 				 "Detected double reduction.\n");
 
               STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_double_reduction_def;
-              STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
-                                                    vect_double_reduction_def;
+	      STMT_VINFO_DEF_TYPE (reduc_stmt_info)
+		= vect_double_reduction_def;
             }
           else
             {
@@ -580,8 +580,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
 				     "Detected vectorizable nested cycle.\n");
 
                   STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_nested_cycle;
-                  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
-                                                             vect_nested_cycle;
+		  STMT_VINFO_DEF_TYPE (reduc_stmt_info) = vect_nested_cycle;
                 }
               else
                 {
@@ -590,13 +589,13 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
 				     "Detected reduction.\n");
 
                   STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_reduction_def;
-                  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
-                                                           vect_reduction_def;
+		  STMT_VINFO_DEF_TYPE (reduc_stmt_info) = vect_reduction_def;
                   /* Store the reduction cycles for possible vectorization in
                      loop-aware SLP if it was not detected as reduction
 		     chain.  */
-		  if (! REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (reduc_stmt)))
-		    LOOP_VINFO_REDUCTIONS (loop_vinfo).safe_push (reduc_stmt);
+		  if (! REDUC_GROUP_FIRST_ELEMENT (reduc_stmt_info))
+		    LOOP_VINFO_REDUCTIONS (loop_vinfo).safe_push
+		      (reduc_stmt_info);
                 }
             }
         }
@@ -2530,8 +2529,8 @@ vect_is_slp_reduction (loop_vec_info loo
   struct loop *loop = (gimple_bb (phi))->loop_father;
   struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
   enum tree_code code;
-  gimple *current_stmt = NULL, *loop_use_stmt = NULL, *first, *next_stmt;
-  stmt_vec_info use_stmt_info, current_stmt_info;
+  gimple *loop_use_stmt = NULL, *first, *next_stmt;
+  stmt_vec_info use_stmt_info, current_stmt_info = NULL;
   tree lhs;
   imm_use_iterator imm_iter;
   use_operand_p use_p;
@@ -2593,9 +2592,8 @@ vect_is_slp_reduction (loop_vec_info loo
 
       /* Insert USE_STMT into reduction chain.  */
       use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
-      if (current_stmt)
+      if (current_stmt_info)
         {
-          current_stmt_info = vinfo_for_stmt (current_stmt);
 	  REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = loop_use_stmt;
           REDUC_GROUP_FIRST_ELEMENT (use_stmt_info)
             = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
@@ -2604,7 +2602,7 @@ vect_is_slp_reduction (loop_vec_info loo
 	REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = loop_use_stmt;
 
       lhs = gimple_assign_lhs (loop_use_stmt);
-      current_stmt = loop_use_stmt;
+      current_stmt_info = use_stmt_info;
       size++;
    }
 
@@ -2614,7 +2612,7 @@ vect_is_slp_reduction (loop_vec_info loo
   /* Swap the operands, if needed, to make the reduction operand be the second
      operand.  */
   lhs = PHI_RESULT (phi);
-  next_stmt = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (current_stmt));
+  next_stmt = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
   while (next_stmt)
     {
       if (gimple_assign_rhs2 (next_stmt) == lhs)
@@ -2671,7 +2669,7 @@ vect_is_slp_reduction (loop_vec_info loo
     }
 
   /* Save the chain for further analysis in SLP detection.  */
-  first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (current_stmt));
+  first = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
   LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first);
   REDUC_GROUP_SIZE (vinfo_for_stmt (first)) = size;
 
@@ -2867,15 +2865,16 @@ check_reduction_path (dump_user_location
 
 */
 
-static gimple *
-vect_is_simple_reduction (loop_vec_info loop_info, gimple *phi,
+static stmt_vec_info
+vect_is_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info,
 			  bool *double_reduc,
 			  bool need_wrapping_integral_overflow,
 			  enum vect_reduction_type *v_reduc_type)
 {
+  gphi *phi = as_a <gphi *> (phi_info->stmt);
   struct loop *loop = (gimple_bb (phi))->loop_father;
   struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
-  gimple *def_stmt, *phi_use_stmt = NULL;
+  gimple *phi_use_stmt = NULL;
   enum tree_code orig_code, code;
   tree op1, op2, op3 = NULL_TREE, op4 = NULL_TREE;
   tree type;
@@ -2937,13 +2936,16 @@ vect_is_simple_reduction (loop_vec_info
       return NULL;
     }
 
-  def_stmt = SSA_NAME_DEF_STMT (loop_arg);
-  if (is_gimple_assign (def_stmt))
+  stmt_vec_info def_stmt_info = loop_info->lookup_def (loop_arg);
+  if (!def_stmt_info)
+    return NULL;
+
+  if (gassign *def_stmt = dyn_cast <gassign *> (def_stmt_info->stmt))
     {
       name = gimple_assign_lhs (def_stmt);
       phi_def = false;
     }
-  else if (gimple_code (def_stmt) == GIMPLE_PHI)
+  else if (gphi *def_stmt = dyn_cast <gphi *> (def_stmt_info->stmt))
     {
       name = PHI_RESULT (def_stmt);
       phi_def = true;
@@ -2954,14 +2956,12 @@ vect_is_simple_reduction (loop_vec_info
 	{
 	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			   "reduction: unhandled reduction operation: ");
-	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, def_stmt, 0);
+	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+			    def_stmt_info->stmt, 0);
 	}
       return NULL;
     }
 
-  if (! flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)))
-    return NULL;
-
   nloop_uses = 0;
   auto_vec<gphi *, 3> lcphis;
   FOR_EACH_IMM_USE_FAST (use_p, imm_iter, name)
@@ -2987,6 +2987,7 @@ vect_is_simple_reduction (loop_vec_info
      defined in the inner loop.  */
   if (phi_def)
     {
+      gphi *def_stmt = as_a <gphi *> (def_stmt_info->stmt);
       op1 = PHI_ARG_DEF (def_stmt, 0);
 
       if (gimple_phi_num_args (def_stmt) != 1
@@ -3012,7 +3013,7 @@ vect_is_simple_reduction (loop_vec_info
 			    "detected double reduction: ");
 
           *double_reduc = true;
-          return def_stmt;
+	  return def_stmt_info;
         }
 
       return NULL;
@@ -3038,6 +3039,7 @@ vect_is_simple_reduction (loop_vec_info
 	  }
     }
 
+  gassign *def_stmt = as_a <gassign *> (def_stmt_info->stmt);
   bool nested_in_vect_loop = flow_loop_nested_p (vect_loop, loop);
   code = orig_code = gimple_assign_rhs_code (def_stmt);
 
@@ -3178,7 +3180,7 @@ vect_is_simple_reduction (loop_vec_info
     {
       if (dump_enabled_p ())
 	report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
-      return def_stmt;
+      return def_stmt_info;
     }
 
   if (def1_info
@@ -3237,7 +3239,7 @@ vect_is_simple_reduction (loop_vec_info
             report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
         }
 
-      return def_stmt;
+      return def_stmt_info;
     }
 
   /* Try to find SLP reduction chain.  */
@@ -3250,7 +3252,7 @@ vect_is_simple_reduction (loop_vec_info
         report_vect_op (MSG_NOTE, def_stmt,
 			"reduction: detected reduction chain: ");
 
-      return def_stmt;
+      return def_stmt_info;
     }
 
   /* Dissolve group eventually half-built by vect_is_slp_reduction.  */
@@ -3264,9 +3266,8 @@ vect_is_simple_reduction (loop_vec_info
     }
 
   /* Look for the expression computing loop_arg from loop PHI result.  */
-  if (check_reduction_path (vect_location, loop, as_a <gphi *> (phi), loop_arg,
-			    code))
-    return def_stmt;
+  if (check_reduction_path (vect_location, loop, phi, loop_arg, code))
+    return def_stmt_info;
 
   if (dump_enabled_p ())
     {
@@ -3281,25 +3282,24 @@ vect_is_simple_reduction (loop_vec_info
    in-place if it enables detection of more reductions.  Arguments
    as there.  */
 
-gimple *
-vect_force_simple_reduction (loop_vec_info loop_info, gimple *phi,
+stmt_vec_info
+vect_force_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info,
 			     bool *double_reduc,
 			     bool need_wrapping_integral_overflow)
 {
   enum vect_reduction_type v_reduc_type;
-  gimple *def = vect_is_simple_reduction (loop_info, phi, double_reduc,
-					  need_wrapping_integral_overflow,
-					  &v_reduc_type);
-  if (def)
+  stmt_vec_info def_info
+    = vect_is_simple_reduction (loop_info, phi_info, double_reduc,
+				need_wrapping_integral_overflow,
+				&v_reduc_type);
+  if (def_info)
     {
-      stmt_vec_info phi_info = vinfo_for_stmt (phi);
-      stmt_vec_info def_info = vinfo_for_stmt (def);
       STMT_VINFO_REDUC_TYPE (phi_info) = v_reduc_type;
       STMT_VINFO_REDUC_DEF (phi_info) = def_info;
       STMT_VINFO_REDUC_TYPE (def_info) = v_reduc_type;
       STMT_VINFO_REDUC_DEF (def_info) = phi_info;
     }
-  return def;
+  return def_info;
 }
 
 /* Calculate cost of peeling the loop PEEL_ITERS_PROLOGUE times.  */
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:44.289185723 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:22:57.277070390 +0100
@@ -4851,9 +4851,9 @@ vect_pattern_recog_1 (vect_recog_func *r
   if (loop_vinfo)
     {
       unsigned ix, ix2;
-      gimple **elem_ptr;
+      stmt_vec_info *elem_ptr;
       VEC_ORDERED_REMOVE_IF (LOOP_VINFO_REDUCTIONS (loop_vinfo), ix, ix2,
-			     elem_ptr, *elem_ptr == stmt);
+			     elem_ptr, *elem_ptr == stmt_info);
     }
 }
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:50.777128110 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:22:57.277070390 +0100
@@ -1931,6 +1931,7 @@ vect_analyze_slp_instance (vec_info *vin
   unsigned int group_size;
   tree vectype, scalar_type = NULL_TREE;
   gimple *next;
+  stmt_vec_info next_info;
   unsigned int i;
   vec<slp_tree> loads;
   struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
@@ -2008,9 +2009,9 @@ vect_analyze_slp_instance (vec_info *vin
   else
     {
       /* Collect reduction statements.  */
-      vec<gimple *> reductions = as_a <loop_vec_info> (vinfo)->reductions;
-      for (i = 0; reductions.iterate (i, &next); i++)
-	scalar_stmts.safe_push (next);
+      vec<stmt_vec_info> reductions = as_a <loop_vec_info> (vinfo)->reductions;
+      for (i = 0; reductions.iterate (i, &next_info); i++)
+	scalar_stmts.safe_push (next_info);
     }
 
   loads.create (group_size);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [18/46] Make SLP_TREE_SCALAR_STMTS a vec<stmt_vec_info>
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (16 preceding siblings ...)
  2018-07-24  9:59 ` [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info> Richard Sandiford
@ 2018-07-24 10:00 ` Richard Sandiford
  2018-07-25  9:27   ` Richard Biener
  2018-07-24 10:01 ` [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos Richard Sandiford
                   ` (27 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:00 UTC (permalink / raw)
  To: gcc-patches

This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
a vec<stmt_vec_info>.  It's longer than the previous conversions
but mostly mechanical.
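
Most hunks follow the same pattern; the vect_mark_slp_stmts change is
representative (slightly simplified here):

  /* Before: iterate over gimple stmts and look up each stmt_vec_info.  */
  gimple *stmt;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
    STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;

  /* After: the vector already holds stmt_vec_infos.  */
  stmt_vec_info stmt_info;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
    STMT_SLP_TYPE (stmt_info) = mark;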


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
	to a vec<stmt_vec_info>.
	* tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
	(vect_create_new_slp_node): Take a vec<stmt_vec_info> instead of a
	vec<gimple *>.
	(_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
	to a vec<stmt_vec_info>.
	(bst_traits::value_type, bst_traits::compare_type): Likewise.
	(bst_traits::hash): Update accordingly.
	(vect_get_and_check_slp_defs): Change the stmts parameter from
	a vec<gimple *> to a vec<stmt_vec_info>.
	(vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
	(vect_build_slp_tree): Likewise.
	(vect_build_slp_tree_2): Likewise.  Update uses of
	SLP_TREE_SCALAR_STMTS.
	(vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
	(vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
	(vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
	(vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
	(vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
	(vect_slp_analyze_node_operations, vect_slp_analyze_operations)
	(vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
	(vect_get_constant_vectors, vect_get_slp_defs)
	(vect_transform_slp_perm_load, vect_schedule_slp_instance)
	(vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
	(vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
	instead of gimple stmts.
	* tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
	the stores parameter from a vec<gimple *> to a vec<stmt_vec_info>.
	(vect_slp_analyze_instance_dependence): Update uses of
	SLP_TREE_SCALAR_STMTS.
	(vect_slp_analyze_and_verify_node_alignment): Likewise.
	(vect_slp_analyze_and_verify_instance_alignment): Likewise.
	* tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
	(get_initial_defs_for_reduction): Likewise.
	(vect_create_epilog_for_reduction): Likewise.
	(vectorize_fold_left_reduction): Likewise.
	* tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
	(vect_model_simple_cost, vectorizable_shift, vectorizable_load)
	(can_vectorize_live_stmts): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:57.277070390 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:00.401042649 +0100
@@ -138,7 +138,7 @@ struct _slp_tree {
   /* Nodes that contain def-stmts of this node statements operands.  */
   vec<slp_tree> children;
   /* A group of scalar stmts to be vectorized together.  */
-  vec<gimple *> stmts;
+  vec<stmt_vec_info> stmts;
   /* Load permutation relative to the stores, NULL if there is no
      permutation.  */
   vec<unsigned> load_permutation;
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:57.277070390 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:00.401042649 +0100
@@ -66,11 +66,11 @@ vect_free_slp_tree (slp_tree node, bool
      statements would be redundant.  */
   if (!final_p)
     {
-      gimple *stmt;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+      stmt_vec_info stmt_info;
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
 	{
-	  gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
-	  STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
+	  gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
+	  STMT_VINFO_NUM_SLP_USES (stmt_info)--;
 	}
     }
 
@@ -99,21 +99,21 @@ vect_free_slp_instance (slp_instance ins
 /* Create an SLP node for SCALAR_STMTS.  */
 
 static slp_tree
-vect_create_new_slp_node (vec<gimple *> scalar_stmts)
+vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
 {
   slp_tree node;
-  gimple *stmt = scalar_stmts[0];
+  stmt_vec_info stmt_info = scalar_stmts[0];
   unsigned int nops;
 
-  if (is_gimple_call (stmt))
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     nops = gimple_call_num_args (stmt);
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       nops = gimple_num_ops (stmt) - 1;
       if (gimple_assign_rhs_code (stmt) == COND_EXPR)
 	nops++;
     }
-  else if (gimple_code (stmt) == GIMPLE_PHI)
+  else if (is_a <gphi *> (stmt_info->stmt))
     nops = 0;
   else
     return NULL;
@@ -128,8 +128,8 @@ vect_create_new_slp_node (vec<gimple *>
   SLP_TREE_DEF_TYPE (node) = vect_internal_def;
 
   unsigned i;
-  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
-    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
+  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
+    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
 
   return node;
 }
@@ -141,7 +141,7 @@ vect_create_new_slp_node (vec<gimple *>
 typedef struct _slp_oprnd_info
 {
   /* Def-stmts for the operands.  */
-  vec<gimple *> def_stmts;
+  vec<stmt_vec_info> def_stmts;
   /* Information about the first statement, its vector def-type, type, the
      operand itself in case it's constant, and an indication if it's a pattern
      stmt.  */
@@ -297,10 +297,10 @@ can_duplicate_and_interleave_p (unsigned
    ok return 0.  */
 static int
 vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
-			     vec<gimple *> stmts, unsigned stmt_num,
+			     vec<stmt_vec_info> stmts, unsigned stmt_num,
 			     vec<slp_oprnd_info> *oprnds_info)
 {
-  gimple *stmt = stmts[stmt_num];
+  stmt_vec_info stmt_info = stmts[stmt_num];
   tree oprnd;
   unsigned int i, number_of_oprnds;
   enum vect_def_type dt = vect_uninitialized_def;
@@ -312,12 +312,12 @@ vect_get_and_check_slp_defs (vec_info *v
   bool first = stmt_num == 0;
   bool second = stmt_num == 1;
 
-  if (is_gimple_call (stmt))
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     {
       number_of_oprnds = gimple_call_num_args (stmt);
       first_op_idx = 3;
     }
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       enum tree_code code = gimple_assign_rhs_code (stmt);
       number_of_oprnds = gimple_num_ops (stmt) - 1;
@@ -347,12 +347,13 @@ vect_get_and_check_slp_defs (vec_info *v
 	  int *map = maps[*swap];
 
 	  if (i < 2)
-	    oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
+	    oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
+					     first_op_idx), map[i]);
 	  else
-	    oprnd = gimple_op (stmt, map[i]);
+	    oprnd = gimple_op (stmt_info->stmt, map[i]);
 	}
       else
-	oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
+	oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
 
       oprnd_info = (*oprnds_info)[i];
 
@@ -518,18 +519,20 @@ vect_get_and_check_slp_defs (vec_info *v
     {
       /* If there are already uses of this stmt in a SLP instance then
          we've committed to the operand order and can't swap it.  */
-      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
+      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
 	{
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			       "Build SLP failed: cannot swap operands of "
 			       "shared stmt ");
-	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				stmt_info->stmt, 0);
 	    }
 	  return -1;
 	}
 
+      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
       if (first_op_cond)
 	{
 	  tree cond = gimple_assign_rhs1 (stmt);
@@ -655,8 +658,9 @@ vect_record_max_nunits (vec_info *vinfo,
    would be permuted.  */
 
 static bool
-vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
-			       tree vectype, tree_code alt_stmt_code)
+vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
+			       unsigned int group_size, tree vectype,
+			       tree_code alt_stmt_code)
 {
   unsigned HOST_WIDE_INT count;
   if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
@@ -666,7 +670,8 @@ vect_two_operations_perm_ok_p (vec<gimpl
   for (unsigned int i = 0; i < count; ++i)
     {
       unsigned int elt = i;
-      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
+      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
+      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
 	elt += count;
       sel.quick_push (elt);
     }
@@ -690,12 +695,12 @@ vect_two_operations_perm_ok_p (vec<gimpl
 
 static bool
 vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits, bool *matches,
 		       bool *two_operators)
 {
   unsigned int i;
-  gimple *first_stmt = stmts[0], *stmt = stmts[0];
+  stmt_vec_info first_stmt_info = stmts[0];
   enum tree_code first_stmt_code = ERROR_MARK;
   enum tree_code alt_stmt_code = ERROR_MARK;
   enum tree_code rhs_code = ERROR_MARK;
@@ -710,9 +715,10 @@ vect_build_slp_tree_1 (vec_info *vinfo,
   gimple *first_load = NULL, *prev_first_load = NULL;
 
   /* For every stmt in NODE find its def stmt/s.  */
-  FOR_EACH_VEC_ELT (stmts, i, stmt)
+  stmt_vec_info stmt_info;
+  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
     {
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      gimple *stmt = stmt_info->stmt;
       swap[i] = 0;
       matches[i] = false;
 
@@ -723,7 +729,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 	}
 
       /* Fail to vectorize statements marked as unvectorizable.  */
-      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
+      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
         {
           if (dump_enabled_p ())
             {
@@ -755,7 +761,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
       if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
 					   &nunits_vectype)
 	  || (nunits_vectype
-	      && !vect_record_max_nunits (vinfo, stmt, group_size,
+	      && !vect_record_max_nunits (vinfo, stmt_info, group_size,
 					  nunits_vectype, max_nunits)))
 	{
 	  /* Fatal mismatch.  */
@@ -877,7 +883,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 		   && (alt_stmt_code == PLUS_EXPR
 		       || alt_stmt_code == MINUS_EXPR)
 		   && rhs_code == alt_stmt_code)
-              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
+	      && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
                    && (first_stmt_code == ARRAY_REF
                        || first_stmt_code == BIT_FIELD_REF
                        || first_stmt_code == INDIRECT_REF
@@ -893,7 +899,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				   "original stmt ");
 		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				    first_stmt, 0);
+				    first_stmt_info->stmt, 0);
 		}
 	      /* Mismatch.  */
 	      continue;
@@ -915,8 +921,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 
 	  if (rhs_code == CALL_EXPR)
 	    {
-	      gimple *first_stmt = stmts[0];
-	      if (!compatible_calls_p (as_a <gcall *> (first_stmt),
+	      if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
 				       as_a <gcall *> (stmt)))
 		{
 		  if (dump_enabled_p ())
@@ -933,7 +938,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 	}
 
       /* Grouped store or load.  */
-      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	{
 	  if (REFERENCE_CLASS_P (lhs))
 	    {
@@ -943,7 +948,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 	  else
 	    {
 	      /* Load.  */
-              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
+	      first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
               if (prev_first_load)
                 {
                   /* Check that there are no loads from different interleaving
@@ -1061,7 +1066,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 					     vectype, alt_stmt_code))
 	{
 	  for (i = 0; i < group_size; ++i)
-	    if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
+	    if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
 	      {
 		matches[i] = false;
 		if (dump_enabled_p ())
@@ -1070,11 +1075,11 @@ vect_build_slp_tree_1 (vec_info *vinfo,
 				     "Build SLP failed: different operation "
 				     "in stmt ");
 		    dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				      stmts[i], 0);
+				      stmts[i]->stmt, 0);
 		    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				     "original stmt ");
 		    dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				      first_stmt, 0);
+				      first_stmt_info->stmt, 0);
 		  }
 	      }
 	  return false;
@@ -1090,8 +1095,8 @@ vect_build_slp_tree_1 (vec_info *vinfo,
    need a special value for deleted that differs from empty.  */
 struct bst_traits
 {
-  typedef vec <gimple *> value_type;
-  typedef vec <gimple *> compare_type;
+  typedef vec <stmt_vec_info> value_type;
+  typedef vec <stmt_vec_info> compare_type;
   static inline hashval_t hash (value_type);
   static inline bool equal (value_type existing, value_type candidate);
   static inline bool is_empty (value_type x) { return !x.exists (); }
@@ -1105,7 +1110,7 @@ bst_traits::hash (value_type x)
 {
   inchash::hash h;
   for (unsigned i = 0; i < x.length (); ++i)
-    h.add_int (gimple_uid (x[i]));
+    h.add_int (gimple_uid (x[i]->stmt));
   return h.end ();
 }
 inline bool
@@ -1128,7 +1133,7 @@ typedef hash_map <vec <gimple *>, slp_tr
 
 static slp_tree
 vect_build_slp_tree_2 (vec_info *vinfo,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits,
 		       vec<slp_tree> *loads,
 		       bool *matches, unsigned *npermutes, unsigned *tree_size,
@@ -1136,7 +1141,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 
 static slp_tree
 vect_build_slp_tree (vec_info *vinfo,
-		     vec<gimple *> stmts, unsigned int group_size,
+		     vec<stmt_vec_info> stmts, unsigned int group_size,
 		     poly_uint64 *max_nunits, vec<slp_tree> *loads,
 		     bool *matches, unsigned *npermutes, unsigned *tree_size,
 		     unsigned max_tree_size)
@@ -1151,7 +1156,7 @@ vect_build_slp_tree (vec_info *vinfo,
      scalars, see PR81723.  */
   if (! res)
     {
-      vec <gimple *> x;
+      vec <stmt_vec_info> x;
       x.create (stmts.length ());
       x.splice (stmts);
       bst_fail->add (x);
@@ -1168,7 +1173,7 @@ vect_build_slp_tree (vec_info *vinfo,
 
 static slp_tree
 vect_build_slp_tree_2 (vec_info *vinfo,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits,
 		       vec<slp_tree> *loads,
 		       bool *matches, unsigned *npermutes, unsigned *tree_size,
@@ -1176,53 +1181,54 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 {
   unsigned nops, i, this_tree_size = 0;
   poly_uint64 this_max_nunits = *max_nunits;
-  gimple *stmt;
   slp_tree node;
 
   matches[0] = false;
 
-  stmt = stmts[0];
-  if (is_gimple_call (stmt))
+  stmt_vec_info stmt_info = stmts[0];
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     nops = gimple_call_num_args (stmt);
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       nops = gimple_num_ops (stmt) - 1;
       if (gimple_assign_rhs_code (stmt) == COND_EXPR)
 	nops++;
     }
-  else if (gimple_code (stmt) == GIMPLE_PHI)
+  else if (is_a <gphi *> (stmt_info->stmt))
     nops = 0;
   else
     return NULL;
 
   /* If the SLP node is a PHI (induction or reduction), terminate
      the recursion.  */
-  if (gimple_code (stmt) == GIMPLE_PHI)
+  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
     {
       tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
       tree vectype = get_vectype_for_scalar_type (scalar_type);
-      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
+      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
 				   max_nunits))
 	return NULL;
 
-      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
+      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
       /* Induction from different IVs is not supported.  */
       if (def_type == vect_induction_def)
 	{
-	  FOR_EACH_VEC_ELT (stmts, i, stmt)
-	    if (stmt != stmts[0])
+	  stmt_vec_info other_info;
+	  FOR_EACH_VEC_ELT (stmts, i, other_info)
+	    if (stmt_info != other_info)
 	      return NULL;
 	}
       else
 	{
 	  /* Else def types have to match.  */
-	  FOR_EACH_VEC_ELT (stmts, i, stmt)
+	  stmt_vec_info other_info;
+	  FOR_EACH_VEC_ELT (stmts, i, other_info)
 	    {
 	      /* But for reduction chains only check on the first stmt.  */
-	      if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
-		  && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
+	      if (REDUC_GROUP_FIRST_ELEMENT (other_info)
+		  && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
 		continue;
-	      if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
+	      if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
 		return NULL;
 	    }
 	}
@@ -1238,8 +1244,8 @@ vect_build_slp_tree_2 (vec_info *vinfo,
     return NULL;
 
   /* If the SLP node is a load, terminate the recursion.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
-      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
+      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
     {
       *max_nunits = this_max_nunits;
       node = vect_create_new_slp_node (stmts);
@@ -1250,7 +1256,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
   /* Get at the operands, verifying they are compatible.  */
   vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
   slp_oprnd_info oprnd_info;
-  FOR_EACH_VEC_ELT (stmts, i, stmt)
+  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
     {
       int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
 					     stmts, i, &oprnds_info);
@@ -1269,7 +1275,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
   auto_vec<slp_tree, 4> children;
   auto_vec<slp_tree> this_loads;
 
-  stmt = stmts[0];
+  stmt_info = stmts[0];
 
   if (tree_size)
     max_tree_size -= *tree_size;
@@ -1307,8 +1313,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 	      /* ???  Rejecting patterns this way doesn't work.  We'd have to
 		 do extra work to cancel the pattern so the uses see the
 		 scalar version.  */
-	      && !is_pattern_stmt_p
-	            (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
+	      && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
 	    {
 	      slp_tree grandchild;
 
@@ -1352,7 +1357,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 	  /* ???  Rejecting patterns this way doesn't work.  We'd have to
 	     do extra work to cancel the pattern so the uses see the
 	     scalar version.  */
-	  && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
+	  && !is_pattern_stmt_p (stmt_info))
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "Building vector operands from scalars\n");
@@ -1373,7 +1378,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 	     as well as the arms under some constraints.  */
 	  && nops == 2
 	  && oprnds_info[1]->first_dt == vect_internal_def
-	  && is_gimple_assign (stmt)
+	  && is_gimple_assign (stmt_info->stmt)
 	  /* Do so only if the number of not successful permutes was nor more
 	     than a cut-ff as re-trying the recursive match on
 	     possibly each level of the tree would expose exponential
@@ -1389,9 +1394,10 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		{
 		  if (matches[j] != !swap_not_matching)
 		    continue;
-		  gimple *stmt = stmts[j];
+		  stmt_vec_info stmt_info = stmts[j];
 		  /* Verify if we can swap operands of this stmt.  */
-		  if (!is_gimple_assign (stmt)
+		  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+		  if (!stmt
 		      || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
 		    {
 		      if (!swap_not_matching)
@@ -1406,7 +1412,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		     node and temporarily do that when processing it
 		     (or wrap operand accessors in a helper).  */
 		  else if (swap[j] != 0
-			   || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
+			   || STMT_VINFO_NUM_SLP_USES (stmt_info))
 		    {
 		      if (!swap_not_matching)
 			{
@@ -1417,7 +1423,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 					       "Build SLP failed: cannot swap "
 					       "operands of shared stmt ");
 			      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
-						TDF_SLIM, stmts[j], 0);
+						TDF_SLIM, stmts[j]->stmt, 0);
 			    }
 			  goto fail;
 			}
@@ -1454,31 +1460,23 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		 if we end up building the operand from scalars as
 		 we'll continue to process swapped operand two.  */
 	      for (j = 0; j < group_size; ++j)
-		{
-		  gimple *stmt = stmts[j];
-		  gimple_set_plf (stmt, GF_PLF_1, false);
-		}
+		gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
 	      for (j = 0; j < group_size; ++j)
-		{
-		  gimple *stmt = stmts[j];
-		  if (matches[j] == !swap_not_matching)
-		    {
-		      /* Avoid swapping operands twice.  */
-		      if (gimple_plf (stmt, GF_PLF_1))
-			continue;
-		      swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
-					 gimple_assign_rhs2_ptr (stmt));
-		      gimple_set_plf (stmt, GF_PLF_1, true);
-		    }
-		}
+		if (matches[j] == !swap_not_matching)
+		  {
+		    gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
+		    /* Avoid swapping operands twice.  */
+		    if (gimple_plf (stmt, GF_PLF_1))
+		      continue;
+		    swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
+				       gimple_assign_rhs2_ptr (stmt));
+		    gimple_set_plf (stmt, GF_PLF_1, true);
+		  }
 	      /* Verify we swap all duplicates or none.  */
 	      if (flag_checking)
 		for (j = 0; j < group_size; ++j)
-		  {
-		    gimple *stmt = stmts[j];
-		    gcc_assert (gimple_plf (stmt, GF_PLF_1)
-				== (matches[j] == !swap_not_matching));
-		  }
+		  gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
+			      == (matches[j] == !swap_not_matching));
 
 	      /* If we have all children of child built up from scalars then
 		 just throw that away and build it up this node from scalars.  */
@@ -1486,8 +1484,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		  /* ???  Rejecting patterns this way doesn't work.  We'd have
 		     to do extra work to cancel the pattern so the uses see the
 		     scalar version.  */
-		  && !is_pattern_stmt_p
-			(vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
+		  && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
 		{
 		  unsigned int j;
 		  slp_tree grandchild;
@@ -1550,16 +1547,16 @@ vect_print_slp_tree (dump_flags_t dump_k
 		     slp_tree node)
 {
   int i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
   dump_printf_loc (dump_kind, loc, "node%s\n",
 		   SLP_TREE_DEF_TYPE (node) != vect_internal_def
 		   ? " (external)" : "");
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
       dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
-      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
     }
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_print_slp_tree (dump_kind, loc, child);
@@ -1575,15 +1572,15 @@ vect_print_slp_tree (dump_flags_t dump_k
 vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
 {
   int i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
   if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
     return;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     if (j < 0 || i == j)
-      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
+      STMT_SLP_TYPE (stmt_info) = mark;
 
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_mark_slp_stmts (child, mark, j);
@@ -1596,16 +1593,14 @@ vect_mark_slp_stmts (slp_tree node, enum
 vect_mark_slp_stmts_relevant (slp_tree node)
 {
   int i;
-  gimple *stmt;
   stmt_vec_info stmt_info;
   slp_tree child;
 
   if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
     return;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
-      stmt_info = vinfo_for_stmt (stmt);
       gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
                   || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
       STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
@@ -1622,8 +1617,8 @@ vect_mark_slp_stmts_relevant (slp_tree n
 vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
                           vec<unsigned> permutation)
 {
-  gimple *stmt;
-  vec<gimple *> tmp_stmts;
+  stmt_vec_info stmt_info;
+  vec<stmt_vec_info> tmp_stmts;
   unsigned int i;
   slp_tree child;
 
@@ -1634,8 +1629,8 @@ vect_slp_rearrange_stmts (slp_tree node,
   tmp_stmts.create (group_size);
   tmp_stmts.quick_grow_cleared (group_size);
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
-    tmp_stmts[permutation[i]] = stmt;
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
+    tmp_stmts[permutation[i]] = stmt_info;
 
   SLP_TREE_SCALAR_STMTS (node).release ();
   SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
@@ -1696,13 +1691,14 @@ vect_attempt_slp_rearrange_stmts (slp_in
   poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
   FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
     {
-      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
+      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+      first_stmt_info
+	= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
       /* But we have to keep those permutations that are required because
          of handling of gaps.  */
       if (known_eq (unrolling_factor, 1U)
-	  || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
-	      && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
+	  || (group_size == DR_GROUP_SIZE (first_stmt_info)
+	      && DR_GROUP_GAP (first_stmt_info) == 0))
 	SLP_TREE_LOAD_PERMUTATION (node).release ();
       else
 	for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
@@ -1721,7 +1717,7 @@ vect_supported_load_permutation_p (slp_i
   unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
   unsigned int i, j, k, next;
   slp_tree node;
-  gimple *stmt, *load, *next_load;
+  gimple *next_load;
 
   if (dump_enabled_p ())
     {
@@ -1750,18 +1746,18 @@ vect_supported_load_permutation_p (slp_i
       return false;
 
   node = SLP_INSTANCE_TREE (slp_instn);
-  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 
   /* Reduction (there are no data-refs in the root).
      In reduction chain the order of the loads is not important.  */
-  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
-      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  if (!STMT_VINFO_DATA_REF (stmt_info)
+      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     vect_attempt_slp_rearrange_stmts (slp_instn);
 
   /* In basic block vectorization we allow any subchain of an interleaving
      chain.
      FORNOW: not supported in loop SLP because of realignment compications.  */
-  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_BB_VINFO (stmt_info))
     {
       /* Check whether the loads in an instance form a subchain and thus
          no permutation is necessary.  */
@@ -1771,24 +1767,25 @@ vect_supported_load_permutation_p (slp_i
 	    continue;
 	  bool subchain_p = true;
           next_load = NULL;
-          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
-            {
-              if (j != 0
-		  && (next_load != load
-		      || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
+	  stmt_vec_info load_info;
+	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
+	    {
+	      if (j != 0
+		  && (next_load != load_info
+		      || DR_GROUP_GAP (load_info) != 1))
 		{
 		  subchain_p = false;
 		  break;
 		}
-              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
-            }
+	      next_load = DR_GROUP_NEXT_ELEMENT (load_info);
+	    }
 	  if (subchain_p)
 	    SLP_TREE_LOAD_PERMUTATION (node).release ();
 	  else
 	    {
-	      stmt_vec_info group_info
-		= vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
-	      group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
+	      stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
+	      group_info
+		= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
 	      unsigned HOST_WIDE_INT nunits;
 	      unsigned k, maxk = 0;
 	      FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
@@ -1831,7 +1828,7 @@ vect_supported_load_permutation_p (slp_i
   poly_uint64 test_vf
     = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
 			     LOOP_VINFO_VECT_FACTOR
-			     (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
+			     (STMT_VINFO_LOOP_VINFO (stmt_info)));
   FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
     if (node->load_permutation.exists ()
 	&& !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
@@ -1847,15 +1844,15 @@ vect_supported_load_permutation_p (slp_i
 gimple *
 vect_find_last_scalar_stmt_in_slp (slp_tree node)
 {
-  gimple *last = NULL, *stmt;
+  gimple *last = NULL;
+  stmt_vec_info stmt_vinfo;
 
-  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
+  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
     {
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
       if (is_pattern_stmt_p (stmt_vinfo))
 	last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
       else
-	last = get_later_stmt (stmt, last);
+	last = get_later_stmt (stmt_vinfo, last);
     }
 
   return last;
@@ -1926,6 +1923,7 @@ calculate_unrolling_factor (poly_uint64
 vect_analyze_slp_instance (vec_info *vinfo,
 			   gimple *stmt, unsigned max_tree_size)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   slp_instance new_instance;
   slp_tree node;
   unsigned int group_size;
@@ -1934,25 +1932,25 @@ vect_analyze_slp_instance (vec_info *vin
   stmt_vec_info next_info;
   unsigned int i;
   vec<slp_tree> loads;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
-  vec<gimple *> scalar_stmts;
+  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  vec<stmt_vec_info> scalar_stmts;
 
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
       scalar_type = TREE_TYPE (DR_REF (dr));
       vectype = get_vectype_for_scalar_type (scalar_type);
-      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
+      group_size = DR_GROUP_SIZE (stmt_info);
     }
-  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       gcc_assert (is_a <loop_vec_info> (vinfo));
-      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
-      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
+      vectype = STMT_VINFO_VECTYPE (stmt_info);
+      group_size = REDUC_GROUP_SIZE (stmt_info);
     }
   else
     {
       gcc_assert (is_a <loop_vec_info> (vinfo));
-      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
+      vectype = STMT_VINFO_VECTYPE (stmt_info);
       group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
     }
 
@@ -1973,38 +1971,38 @@ vect_analyze_slp_instance (vec_info *vin
   /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
   scalar_stmts.create (group_size);
   next = stmt;
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
       /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
       while (next)
         {
-	  if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
-	      && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
-	    scalar_stmts.safe_push (
-		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
+	  next_info = vinfo_for_stmt (next);
+	  if (STMT_VINFO_IN_PATTERN_P (next_info)
+	      && STMT_VINFO_RELATED_STMT (next_info))
+	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
-            scalar_stmts.safe_push (next);
+	    scalar_stmts.safe_push (next_info);
           next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
         }
     }
-  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       /* Collect the reduction stmts and store them in
 	 SLP_TREE_SCALAR_STMTS.  */
       while (next)
         {
-	  if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
-	      && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
-	    scalar_stmts.safe_push (
-		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
+	  next_info = vinfo_for_stmt (next);
+	  if (STMT_VINFO_IN_PATTERN_P (next_info)
+	      && STMT_VINFO_RELATED_STMT (next_info))
+	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
-            scalar_stmts.safe_push (next);
+	    scalar_stmts.safe_push (next_info);
           next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
         }
       /* Mark the first element of the reduction chain as reduction to properly
 	 transform the node.  In the reduction analysis phase only the last
 	 element of the chain is marked as reduction.  */
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
+      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
     }
   else
     {
@@ -2068,15 +2066,16 @@ vect_analyze_slp_instance (vec_info *vin
 	{
 	  vec<unsigned> load_permutation;
 	  int j;
-	  gimple *load, *first_stmt;
+	  stmt_vec_info load_info;
+	  gimple *first_stmt;
 	  bool this_load_permuted = false;
 	  load_permutation.create (group_size);
 	  first_stmt = DR_GROUP_FIRST_ELEMENT
-	      (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
-	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
+	    (SLP_TREE_SCALAR_STMTS (load_node)[0]);
+	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
 	    {
-		  int load_place = vect_get_place_in_interleaving_chain
-				     (load, first_stmt);
+	      int load_place = vect_get_place_in_interleaving_chain
+		(load_info, first_stmt);
 	      gcc_assert (load_place != -1);
 	      if (load_place != j)
 		this_load_permuted = true;
@@ -2124,7 +2123,7 @@ vect_analyze_slp_instance (vec_info *vin
 	  FOR_EACH_VEC_ELT (loads, i, load_node)
 	    {
 	      gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
-		  (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
+		(SLP_TREE_SCALAR_STMTS (load_node)[0]);
 	      stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
 		  /* Use SLP for strided accesses (or if we
 		     can't load-lanes).  */
@@ -2307,10 +2306,10 @@ vect_make_slp_decision (loop_vec_info lo
 static void
 vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
+  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
   imm_use_iterator imm_iter;
   gimple *use_stmt;
-  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
+  stmt_vec_info use_vinfo;
   slp_tree child;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
   int j;
@@ -2326,6 +2325,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
       gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
       /* If we get a pattern stmt here we have to use the LHS of the
          original stmt for immediate uses.  */
+      gimple *stmt = stmt_vinfo->stmt;
       if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
 	  && STMT_VINFO_RELATED_STMT (stmt_vinfo))
 	stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
@@ -2366,7 +2366,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
       if (dump_enabled_p ())
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
 	}
       STMT_SLP_TYPE (stmt_vinfo) = hybrid;
     }
@@ -2525,9 +2525,8 @@ vect_slp_analyze_node_operations_1 (vec_
 				    slp_instance node_instance,
 				    stmt_vector_for_cost *cost_vec)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  gcc_assert (stmt_info);
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *stmt = stmt_info->stmt;
   gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
 
   /* For BB vectorization vector types are assigned here.
@@ -2551,10 +2550,10 @@ vect_slp_analyze_node_operations_1 (vec_
 	    return false;
 	}
 
-      gimple *sstmt;
+      stmt_vec_info sstmt_info;
       unsigned int i;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
-	STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
+	STMT_VINFO_VECTYPE (sstmt_info) = vectype;
     }
 
   /* Calculate the number of vector statements to be created for the
@@ -2626,14 +2625,14 @@ vect_slp_analyze_node_operations (vec_in
   /* Push SLP node def-type to stmt operands.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
+      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
 	= SLP_TREE_DEF_TYPE (child);
   bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
 						 cost_vec);
   /* Restore def-types.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
+      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
 	= vect_internal_def;
   if (! res)
     return false;
@@ -2665,11 +2664,11 @@ vect_slp_analyze_operations (vec_info *v
 					     instance, visited, &lvisited,
 					     &cost_vec))
         {
+	  slp_tree node = SLP_INSTANCE_TREE (instance);
+	  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "removing SLP instance operations starting from: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
-			    SLP_TREE_SCALAR_STMTS
-			      (SLP_INSTANCE_TREE (instance))[0], 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	  vect_free_slp_instance (instance, false);
           vinfo->slp_instances.ordered_remove (i);
 	  cost_vec.release ();
@@ -2701,14 +2700,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
 			 stmt_vector_for_cost *cost_vec)
 {
   unsigned i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
+      gimple *stmt = stmt_info->stmt;
       ssa_op_iter op_iter;
       def_operand_p def_p;
-      stmt_vec_info stmt_info;
 
       if ((*life)[i])
 	continue;
@@ -2724,8 +2723,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
 	  gimple *use_stmt;
 	  FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
 	    if (!is_gimple_debug (use_stmt)
-		&& (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
-					     use_stmt)
+		&& (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
 		    || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
 	      {
 		(*life)[i] = true;
@@ -2740,7 +2738,6 @@ vect_bb_slp_scalar_cost (basic_block bb,
 	continue;
       gimple_set_visited (stmt, true);
 
-      stmt_info = vinfo_for_stmt (stmt);
       vect_cost_for_stmt kind;
       if (STMT_VINFO_DATA_REF (stmt_info))
         {
@@ -2944,11 +2941,11 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
       if (! vect_slp_analyze_and_verify_instance_alignment (instance)
 	  || ! vect_slp_analyze_instance_dependence (instance))
 	{
+	  slp_tree node = SLP_INSTANCE_TREE (instance);
+	  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "removing SLP instance operations starting from: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
-			    SLP_TREE_SCALAR_STMTS
-			      (SLP_INSTANCE_TREE (instance))[0], 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	  vect_free_slp_instance (instance, false);
 	  BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
 	  continue;
@@ -3299,9 +3296,9 @@ vect_get_constant_vectors (tree op, slp_
                            vec<tree> *vec_oprnds,
 			   unsigned int op_num, unsigned int number_of_vectors)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
+  gimple *stmt = stmt_vinfo->stmt;
   unsigned HOST_WIDE_INT nunits;
   tree vec_cst;
   unsigned j, number_of_places_left_in_vector;
@@ -3320,7 +3317,7 @@ vect_get_constant_vectors (tree op, slp_
 
   /* Check if vector type is a boolean vector.  */
   if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
-      && vect_mask_constant_operand_p (stmt, op_num))
+      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
     vector_type
       = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
   else
@@ -3366,8 +3363,9 @@ vect_get_constant_vectors (tree op, slp_
   bool place_after_defs = false;
   for (j = 0; j < number_of_copies; j++)
     {
-      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
+      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
         {
+	  stmt = stmt_vinfo->stmt;
           if (is_store)
             op = gimple_assign_rhs1 (stmt);
           else
@@ -3496,10 +3494,12 @@ vect_get_constant_vectors (tree op, slp_
 		{
 		  gsi = gsi_for_stmt
 		          (vect_find_last_scalar_stmt_in_slp (slp_node));
-		  init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
+		  init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
+					   &gsi);
 		}
 	      else
-		init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
+		init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
+					 NULL);
 	      if (ctor_seq != NULL)
 		{
 		  gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
@@ -3612,15 +3612,14 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
 	  /* We have to check both pattern and original def, if available.  */
 	  if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
 	    {
-	      gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
-	      stmt_vec_info related
-		= STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
+	      stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
+	      stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
 	      tree first_def_op;
 
-	      if (gimple_code (first_def) == GIMPLE_PHI)
+	      if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
 		first_def_op = gimple_phi_result (first_def);
 	      else
-		first_def_op = gimple_get_lhs (first_def);
+		first_def_op = gimple_get_lhs (first_def_info->stmt);
 	      if (operand_equal_p (oprnd, first_def_op, 0)
 		  || (related
 		      && operand_equal_p (oprnd,
@@ -3686,8 +3685,7 @@ vect_transform_slp_perm_load (slp_tree n
 			      slp_instance slp_node_instance, bool analyze_only,
 			      unsigned *n_perms)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
   vec_info *vinfo = stmt_info->vinfo;
   tree mask_element_type = NULL_TREE, mask_type;
   int vec_index = 0;
@@ -3779,7 +3777,7 @@ vect_transform_slp_perm_load (slp_tree n
 				   "permutation requires at "
 				   "least three vectors ");
 		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				    stmt, 0);
+				    stmt_info->stmt, 0);
 		}
 	      gcc_assert (analyze_only);
 	      return false;
@@ -3832,6 +3830,7 @@ vect_transform_slp_perm_load (slp_tree n
 		  stmt_vec_info perm_stmt_info;
 		  if (! noop_p)
 		    {
+		      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
 		      tree perm_dest
 			= vect_create_destination_var (gimple_assign_lhs (stmt),
 						       vectype);
@@ -3841,7 +3840,8 @@ vect_transform_slp_perm_load (slp_tree n
 					       first_vec, second_vec,
 					       mask_vec);
 		      perm_stmt_info
-			= vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, perm_stmt,
+						       gsi);
 		    }
 		  else
 		    /* If mask was NULL_TREE generate the requested
@@ -3870,7 +3870,6 @@ vect_transform_slp_perm_load (slp_tree n
 vect_schedule_slp_instance (slp_tree node, slp_instance instance,
 			    scalar_stmts_to_slp_tree_map_t *bst_map)
 {
-  gimple *stmt;
   bool grouped_store, is_store;
   gimple_stmt_iterator si;
   stmt_vec_info stmt_info;
@@ -3897,11 +3896,13 @@ vect_schedule_slp_instance (slp_tree nod
   /* Push SLP node def-type to stmts.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
-	STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
+      {
+	stmt_vec_info child_stmt_info;
+	FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
+	  STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
+      }
 
-  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_info = vinfo_for_stmt (stmt);
+  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 
   /* VECTYPE is the type of the destination.  */
   vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -3916,7 +3917,7 @@ vect_schedule_slp_instance (slp_tree nod
     {
       dump_printf_loc (MSG_NOTE,vect_location,
 		       "------>vectorizing SLP node starting from: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
     }
 
   /* Vectorized stmts go before the last scalar stmt which is where
@@ -3928,7 +3929,7 @@ vect_schedule_slp_instance (slp_tree nod
      chain is marked as reduction.  */
   if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
       && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
-      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
+      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
     {
       STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
       STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
@@ -3938,29 +3939,33 @@ vect_schedule_slp_instance (slp_tree nod
      both operations and then performing a merge.  */
   if (SLP_TREE_TWO_OPERATORS (node))
     {
+      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
       enum tree_code code0 = gimple_assign_rhs_code (stmt);
       enum tree_code ocode = ERROR_MARK;
-      gimple *ostmt;
+      stmt_vec_info ostmt_info;
       vec_perm_builder mask (group_size, group_size, 1);
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
-	if (gimple_assign_rhs_code (ostmt) != code0)
-	  {
-	    mask.quick_push (1);
-	    ocode = gimple_assign_rhs_code (ostmt);
-	  }
-	else
-	  mask.quick_push (0);
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
+	{
+	  gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
+	  if (gimple_assign_rhs_code (ostmt) != code0)
+	    {
+	      mask.quick_push (1);
+	      ocode = gimple_assign_rhs_code (ostmt);
+	    }
+	  else
+	    mask.quick_push (0);
+	}
       if (ocode != ERROR_MARK)
 	{
 	  vec<stmt_vec_info> v0;
 	  vec<stmt_vec_info> v1;
 	  unsigned j;
 	  tree tmask = NULL_TREE;
-	  vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+	  vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
 	  v0 = SLP_TREE_VEC_STMTS (node).copy ();
 	  SLP_TREE_VEC_STMTS (node).truncate (0);
 	  gimple_assign_set_rhs_code (stmt, ocode);
-	  vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+	  vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
 	  gimple_assign_set_rhs_code (stmt, code0);
 	  v1 = SLP_TREE_VEC_STMTS (node).copy ();
 	  SLP_TREE_VEC_STMTS (node).truncate (0);
@@ -3998,20 +4003,24 @@ vect_schedule_slp_instance (slp_tree nod
 					   gimple_assign_lhs (v1[j]->stmt),
 					   tmask);
 	      SLP_TREE_VEC_STMTS (node).quick_push
-		(vect_finish_stmt_generation (stmt, vstmt, &si));
+		(vect_finish_stmt_generation (stmt_info, vstmt, &si));
 	    }
 	  v0.release ();
 	  v1.release ();
 	  return false;
 	}
     }
-  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
+				  instance);
 
   /* Restore stmt def-types.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
-	STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
+      {
+	stmt_vec_info child_stmt_info;
+	FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
+	  STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
+      }
 
   return is_store;
 }
@@ -4024,7 +4033,7 @@ vect_schedule_slp_instance (slp_tree nod
 static void
 vect_remove_slp_scalar_calls (slp_tree node)
 {
-  gimple *stmt, *new_stmt;
+  gimple *new_stmt;
   gimple_stmt_iterator gsi;
   int i;
   slp_tree child;
@@ -4037,13 +4046,12 @@ vect_remove_slp_scalar_calls (slp_tree n
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_remove_slp_scalar_calls (child);
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
-      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
+      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
+      if (!stmt || gimple_bb (stmt) == NULL)
 	continue;
-      stmt_info = vinfo_for_stmt (stmt);
-      if (stmt_info == NULL_STMT_VEC_INFO
-	  || is_pattern_stmt_p (stmt_info)
+      if (is_pattern_stmt_p (stmt_info)
 	  || !PURE_SLP_STMT (stmt_info))
 	continue;
       lhs = gimple_call_lhs (stmt);
@@ -4085,7 +4093,7 @@ vect_schedule_slp (vec_info *vinfo)
   FOR_EACH_VEC_ELT (slp_instances, i, instance)
     {
       slp_tree root = SLP_INSTANCE_TREE (instance);
-      gimple *store;
+      stmt_vec_info store_info;
       unsigned int j;
       gimple_stmt_iterator gsi;
 
@@ -4099,20 +4107,20 @@ vect_schedule_slp (vec_info *vinfo)
       if (is_a <loop_vec_info> (vinfo))
 	vect_remove_slp_scalar_calls (root);
 
-      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
+      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
                   && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
         {
-          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
-            break;
+	  if (!STMT_VINFO_DATA_REF (store_info))
+	    break;
 
-         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
-           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
-          /* Free the attached stmt_vec_info and remove the stmt.  */
-          gsi = gsi_for_stmt (store);
-	  unlink_stmt_vdef (store);
-          gsi_remove (&gsi, true);
-	  release_defs (store);
-          free_stmt_vec_info (store);
+	  if (is_pattern_stmt_p (store_info))
+	    store_info = STMT_VINFO_RELATED_STMT (store_info);
+	  /* Free the attached stmt_vec_info and remove the stmt.  */
+	  gsi = gsi_for_stmt (store_info);
+	  unlink_stmt_vdef (store_info);
+	  gsi_remove (&gsi, true);
+	  release_defs (store_info);
+	  free_stmt_vec_info (store_info);
         }
     }
 
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:22:47.485157343 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:00.397042684 +0100
@@ -665,7 +665,8 @@ vect_slp_analyze_data_ref_dependence (st
 
 static bool
 vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
-				   vec<gimple *> stores, gimple *last_store)
+				   vec<stmt_vec_info> stores,
+				   gimple *last_store)
 {
   /* This walks over all stmts involved in the SLP load/store done
      in NODE verifying we can sink them up to the last stmt in the
@@ -673,13 +674,13 @@ vect_slp_analyze_node_dependences (slp_i
   gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
   for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
     {
-      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
-      if (access == last_access)
+      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
+      if (access_info == last_access)
 	continue;
-      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
+      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
       ao_ref ref;
       bool ref_initialized_p = false;
-      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
+      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
 	   gsi_stmt (gsi) != last_access; gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
@@ -712,11 +713,10 @@ vect_slp_analyze_node_dependences (slp_i
 	      if (stmt != last_store)
 		continue;
 	      unsigned i;
-	      gimple *store;
-	      FOR_EACH_VEC_ELT (stores, i, store)
+	      stmt_vec_info store_info;
+	      FOR_EACH_VEC_ELT (stores, i, store_info)
 		{
-		  data_reference *store_dr
-		    = STMT_VINFO_DATA_REF (vinfo_for_stmt (store));
+		  data_reference *store_dr = STMT_VINFO_DATA_REF (store_info);
 		  ddr_p ddr = initialize_data_dependence_relation
 				(dr_a, store_dr, vNULL);
 		  dependent = vect_slp_analyze_data_ref_dependence (ddr);
@@ -753,7 +753,7 @@ vect_slp_analyze_instance_dependence (sl
 
   /* The stores of this instance are at the root of the SLP tree.  */
   slp_tree store = SLP_INSTANCE_TREE (instance);
-  if (! STMT_VINFO_DATA_REF (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (store)[0])))
+  if (! STMT_VINFO_DATA_REF (SLP_TREE_SCALAR_STMTS (store)[0]))
     store = NULL;
 
   /* Verify we can sink stores to the vectorized stmt insert location.  */
@@ -766,7 +766,7 @@ vect_slp_analyze_instance_dependence (sl
       /* Mark stores in this instance and remember the last one.  */
       last_store = vect_find_last_scalar_stmt_in_slp (store);
       for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
-	gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k], true);
+	gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, true);
     }
 
   bool res = true;
@@ -788,7 +788,7 @@ vect_slp_analyze_instance_dependence (sl
   /* Unset the visited flag.  */
   if (store)
     for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
-      gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k], false);
+      gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, false);
 
   return res;
 }
@@ -2389,10 +2389,11 @@ vect_slp_analyze_and_verify_node_alignme
   /* We vectorize from the first scalar stmt in the node unless
      the node is permuted in which case we start from the first
      element in the group.  */
-  gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+  stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *first_stmt = first_stmt_info->stmt;
+  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
-    first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
+    first_stmt = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
 
   data_reference_p dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
   vect_compute_data_ref_alignment (dr);
@@ -2429,7 +2430,7 @@ vect_slp_analyze_and_verify_instance_ali
       return false;
 
   node = SLP_INSTANCE_TREE (instance);
-  if (STMT_VINFO_DATA_REF (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]))
+  if (STMT_VINFO_DATA_REF (SLP_TREE_SCALAR_STMTS (node)[0])
       && ! vect_slp_analyze_and_verify_node_alignment
 	     (SLP_INSTANCE_TREE (instance)))
     return false;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:57.273070426 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:00.397042684 +0100
@@ -2186,8 +2186,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
   FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), i, instance)
     {
       stmt_vec_info vinfo;
-      vinfo = vinfo_for_stmt
-	  (SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0]);
+      vinfo = SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0];
       if (! STMT_VINFO_GROUPED_ACCESS (vinfo))
 	continue;
       vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
@@ -2199,7 +2198,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
        return false;
       FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (instance), j, node)
 	{
-	  vinfo = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
+	  vinfo = SLP_TREE_SCALAR_STMTS (node)[0];
 	  vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
 	  bool single_element_p = !DR_GROUP_NEXT_ELEMENT (vinfo);
 	  size = DR_GROUP_SIZE (vinfo);
@@ -2442,12 +2441,11 @@ reduction_fn_for_scalar_code (enum tree_
 neutral_op_for_slp_reduction (slp_tree slp_node, tree_code code,
 			      bool reduc_chain)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
   tree vector_type = STMT_VINFO_VECTYPE (stmt_vinfo);
   tree scalar_type = TREE_TYPE (vector_type);
-  struct loop *loop = gimple_bb (stmt)->loop_father;
+  struct loop *loop = gimple_bb (stmt_vinfo->stmt)->loop_father;
   gcc_assert (loop);
 
   switch (code)
@@ -2473,7 +2471,8 @@ neutral_op_for_slp_reduction (slp_tree s
 	 has only a single initial value, so that value is neutral for
 	 all statements.  */
       if (reduc_chain)
-	return PHI_ARG_DEF_FROM_EDGE (stmt, loop_preheader_edge (loop));
+	return PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt,
+				      loop_preheader_edge (loop));
       return NULL_TREE;
 
     default:
@@ -4182,9 +4181,8 @@ get_initial_defs_for_reduction (slp_tree
 				unsigned int number_of_vectors,
 				bool reduc_chain, tree neutral_op)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
   unsigned HOST_WIDE_INT nunits;
   unsigned j, number_of_places_left_in_vector;
   tree vector_type;
@@ -4201,7 +4199,7 @@ get_initial_defs_for_reduction (slp_tree
 
   gcc_assert (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def);
 
-  loop = (gimple_bb (stmt))->loop_father;
+  loop = (gimple_bb (stmt_vinfo->stmt))->loop_father;
   gcc_assert (loop);
   edge pe = loop_preheader_edge (loop);
 
@@ -4234,7 +4232,7 @@ get_initial_defs_for_reduction (slp_tree
   elts.quick_grow (nunits);
   for (j = 0; j < number_of_copies; j++)
     {
-      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
+      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
         {
 	  tree op;
 	  /* Get the def before the loop.  In reduction chain we have only
@@ -4244,7 +4242,7 @@ get_initial_defs_for_reduction (slp_tree
 	      && neutral_op)
 	    op = neutral_op;
 	  else
-	    op = PHI_ARG_DEF_FROM_EDGE (stmt, pe);
+	    op = PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt, pe);
 
           /* Create 'vect_ = {op0,op1,...,opn}'.  */
           number_of_places_left_in_vector--;
@@ -5128,7 +5126,8 @@ vect_create_epilog_for_reduction (vec<tr
       gcc_assert (pow2p_hwi (group_size));
 
       slp_tree orig_phis_slp_node = slp_node_instance->reduc_phis;
-      vec<gimple *> orig_phis = SLP_TREE_SCALAR_STMTS (orig_phis_slp_node);
+      vec<stmt_vec_info> orig_phis
+	= SLP_TREE_SCALAR_STMTS (orig_phis_slp_node);
       gimple_seq seq = NULL;
 
       /* Build a vector {0, 1, 2, ...}, with the same number of elements
@@ -5159,7 +5158,7 @@ vect_create_epilog_for_reduction (vec<tr
 	  if (!neutral_op)
 	    {
 	      tree scalar_value
-		= PHI_ARG_DEF_FROM_EDGE (orig_phis[i],
+		= PHI_ARG_DEF_FROM_EDGE (orig_phis[i]->stmt,
 					 loop_preheader_edge (loop));
 	      vector_identity = gimple_build_vector_from_val (&seq, vectype,
 							      scalar_value);
@@ -5572,12 +5571,13 @@ vect_create_epilog_for_reduction (vec<tr
      the loop exit phi node.  */
   if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
     {
-      gimple *dest_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
+      stmt_vec_info dest_stmt_info
+	= SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
       /* Handle reduction patterns.  */
-      if (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (dest_stmt)))
-	dest_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (dest_stmt));
+      if (STMT_VINFO_RELATED_STMT (dest_stmt_info))
+	dest_stmt_info = STMT_VINFO_RELATED_STMT (dest_stmt_info);
 
-      scalar_dest = gimple_assign_lhs (dest_stmt);
+      scalar_dest = gimple_assign_lhs (dest_stmt_info->stmt);
       group_size = 1;
     }
 
@@ -5607,13 +5607,12 @@ vect_create_epilog_for_reduction (vec<tr
 
       if (slp_reduc)
         {
-	  gimple *current_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[k];
+	  stmt_vec_info scalar_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[k];
 
-	  orig_stmt_info
-	    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
+	  orig_stmt_info = STMT_VINFO_RELATED_STMT (scalar_stmt_info);
 	  /* SLP statements can't participate in patterns.  */
 	  gcc_assert (!orig_stmt_info);
-	  scalar_dest = gimple_assign_lhs (current_stmt);
+	  scalar_dest = gimple_assign_lhs (scalar_stmt_info->stmt);
         }
 
       phis.create (3);
@@ -5881,23 +5880,23 @@ vectorize_fold_left_reduction (gimple *s
   tree op0 = ops[1 - reduc_index];
 
   int group_size = 1;
-  gimple *scalar_dest_def;
+  stmt_vec_info scalar_dest_def_info;
   auto_vec<tree> vec_oprnds0;
   if (slp_node)
     {
       vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL, slp_node);
       group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
-      scalar_dest_def = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
+      scalar_dest_def_info = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
     }
   else
     {
       tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt);
       vec_oprnds0.create (1);
       vec_oprnds0.quick_push (loop_vec_def0);
-      scalar_dest_def = stmt;
+      scalar_dest_def_info = stmt_info;
     }
 
-  tree scalar_dest = gimple_assign_lhs (scalar_dest_def);
+  tree scalar_dest = gimple_assign_lhs (scalar_dest_def_info->stmt);
   tree scalar_type = TREE_TYPE (scalar_dest);
   tree reduc_var = gimple_phi_result (reduc_def_stmt);
 
@@ -5964,10 +5963,11 @@ vectorize_fold_left_reduction (gimple *s
       if (i == vec_num - 1)
 	{
 	  gimple_set_lhs (new_stmt, scalar_dest);
-	  new_stmt_info = vect_finish_replace_stmt (scalar_dest_def, new_stmt);
+	  new_stmt_info = vect_finish_replace_stmt (scalar_dest_def_info,
+						    new_stmt);
 	}
       else
-	new_stmt_info = vect_finish_stmt_generation (scalar_dest_def,
+	new_stmt_info = vect_finish_stmt_generation (scalar_dest_def_info,
 						     new_stmt, gsi);
 
       if (slp_node)
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:47.489157307 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:00.401042649 +0100
@@ -806,7 +806,7 @@ vect_prologue_cost_for_slp_op (slp_tree
 			       unsigned opno, enum vect_def_type dt,
 			       stmt_vector_for_cost *cost_vec)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
   tree op = gimple_op (stmt, opno);
   unsigned prologue_cost = 0;
 
@@ -838,11 +838,11 @@ vect_prologue_cost_for_slp_op (slp_tree
     {
       unsigned si = j % group_size;
       if (nelt == 0)
-	elt = gimple_op (SLP_TREE_SCALAR_STMTS (node)[si], opno);
+	elt = gimple_op (SLP_TREE_SCALAR_STMTS (node)[si]->stmt, opno);
       /* ???  We're just tracking whether all operands of a single
 	 vector initializer are the same, ideally we'd check if
 	 we emitted the same one already.  */
-      else if (elt != gimple_op (SLP_TREE_SCALAR_STMTS (node)[si],
+      else if (elt != gimple_op (SLP_TREE_SCALAR_STMTS (node)[si]->stmt,
 				 opno))
 	elt = NULL_TREE;
       nelt++;
@@ -889,7 +889,7 @@ vect_model_simple_cost (stmt_vec_info st
       /* Scan operands and account for prologue cost of constants/externals.
 	 ???  This over-estimates cost for multiple uses and should be
 	 re-engineered.  */
-      gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+      gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
       tree lhs = gimple_get_lhs (stmt);
       for (unsigned i = 0; i < gimple_num_ops (stmt); ++i)
 	{
@@ -5532,12 +5532,15 @@ vectorizable_shift (gimple *stmt, gimple
 	 a scalar shift.  */
       if (slp_node)
 	{
-	  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-	  gimple *slpstmt;
+	  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+	  stmt_vec_info slpstmt_info;
 
-	  FOR_EACH_VEC_ELT (stmts, k, slpstmt)
-	    if (!operand_equal_p (gimple_assign_rhs2 (slpstmt), op1, 0))
-	      scalar_shift_arg = false;
+	  FOR_EACH_VEC_ELT (stmts, k, slpstmt_info)
+	    {
+	      gassign *slpstmt = as_a <gassign *> (slpstmt_info->stmt);
+	      if (!operand_equal_p (gimple_assign_rhs2 (slpstmt), op1, 0))
+		scalar_shift_arg = false;
+	    }
 	}
 
       /* If the shift amount is computed by a pattern stmt we cannot
@@ -7421,7 +7424,7 @@ vectorizable_load (gimple *stmt, gimple_
   vec<tree> dr_chain = vNULL;
   bool grouped_load = false;
   gimple *first_stmt;
-  gimple *first_stmt_for_drptr = NULL;
+  stmt_vec_info first_stmt_info_for_drptr = NULL;
   bool inv_p;
   bool compute_in_loop = false;
   struct loop *at_loop;
@@ -7930,7 +7933,7 @@ vectorizable_load (gimple *stmt, gimple_
       /* For BB vectorization always use the first stmt to base
 	 the data ref pointer on.  */
       if (bb_vinfo)
-	first_stmt_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
+	first_stmt_info_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
 
       /* Check if the chain of loads is already vectorized.  */
       if (STMT_VINFO_VEC_STMT (vinfo_for_stmt (first_stmt))
@@ -8180,17 +8183,17 @@ vectorizable_load (gimple *stmt, gimple_
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
-	  else if (first_stmt_for_drptr
-		   && first_stmt != first_stmt_for_drptr)
+	  else if (first_stmt_info_for_drptr
+		   && first_stmt != first_stmt_info_for_drptr)
 	    {
 	      dataref_ptr
-		= vect_create_data_ref_ptr (first_stmt_for_drptr, aggr_type,
-					    at_loop, offset, &dummy, gsi,
-					    &ptr_incr, simd_lane_access_p,
+		= vect_create_data_ref_ptr (first_stmt_info_for_drptr,
+					    aggr_type, at_loop, offset, &dummy,
+					    gsi, &ptr_incr, simd_lane_access_p,
 					    &inv_p, byte_offset, bump);
 	      /* Adjust the pointer by the difference to first_stmt.  */
 	      data_reference_p ptrdr
-		= STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt_for_drptr));
+		= STMT_VINFO_DATA_REF (first_stmt_info_for_drptr);
 	      tree diff = fold_convert (sizetype,
 					size_binop (MINUS_EXPR,
 						    DR_INIT (first_dr),
@@ -9391,13 +9394,12 @@ can_vectorize_live_stmts (gimple *stmt,
 {
   if (slp_node)
     {
-      gimple *slp_stmt;
+      stmt_vec_info slp_stmt_info;
       unsigned int i;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (slp_node), i, slp_stmt)
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (slp_node), i, slp_stmt_info)
 	{
-	  stmt_vec_info slp_stmt_info = vinfo_for_stmt (slp_stmt);
 	  if (STMT_VINFO_LIVE_P (slp_stmt_info)
-	      && !vectorizable_live_operation (slp_stmt, gsi, slp_node, i,
+	      && !vectorizable_live_operation (slp_stmt_info, gsi, slp_node, i,
 					       vec_stmt, cost_vec))
 	    return false;
 	}

* [19/46] Make vect_dr_stmt return a stmt_vec_info
From: Richard Sandiford @ 2018-07-24 10:01 UTC (permalink / raw)
  To: gcc-patches

This patch makes vect_dr_stmt return a stmt_vec_info instead of a
gimple stmt.  Rather than retain a separate gimple stmt variable
in cases where both existed, the patch replaces uses of the gimple
variable with uses of the stmt_vec_info.  Later patches do this
more generally.

Many things that are keyed off a data_reference would these days
be better keyed off a stmt_vec_info, but it's more convenient
to do that later in the series.  The vect_dr_stmt calls that are
left over do still benefit from this patch.
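
As a concrete illustration, here is how a typical caller changes
(a minimal sketch based on the hunks below, where "dr" stands for
whatever data_reference is being queried):

  /* Before: vect_dr_stmt returned a gimple stmt, so callers that
     wanted the expanded info needed an extra lookup.  */
  stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));

  /* After: vect_dr_stmt returns the stmt_vec_info directly (still
     resolving pattern statements first), and the underlying gimple
     stmt stays available as stmt_info->stmt.  */
  stmt_vec_info stmt_info = vect_dr_stmt (dr);
  tree vectype = STMT_VINFO_VECTYPE (stmt_info);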


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_dr_stmt): Return a stmt_vec_info rather
	than a gimple stmt.
	* tree-vect-data-refs.c (vect_analyze_data_ref_dependence)
	(vect_slp_analyze_data_ref_dependence, vect_record_base_alignments)
	(vect_calculate_target_alignment, vect_compute_data_ref_alignment)
	(vect_update_misalignment_for_peel, vect_verify_datarefs_alignment)
	(vector_alignment_reachable_p, vect_get_data_access_cost)
	(vect_get_peeling_costs_all_drs, vect_peeling_hash_get_lowest_cost)
	(vect_peeling_supportable, vect_enhance_data_refs_alignment)
	(vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
	(vect_analyze_group_access_1, vect_analyze_group_access)
	(vect_analyze_data_ref_access, vect_analyze_data_ref_accesses)
	(vect_vfa_access_size, vect_small_gap_p, vect_analyze_data_refs)
	(vect_supportable_dr_alignment): Remove vinfo_for_stmt from the
	result of vect_dr_stmt and use the stmt_vec_info instead of
	the associated gimple stmt.
	* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
	(vect_gen_prolog_loop_niters): Likewise.
	* tree-vect-loop.c (vect_analyze_loop_2): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:00.401042649 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:04.033010396 +0100
@@ -1370,7 +1370,7 @@ vect_dr_behavior (data_reference *dr)
    a pattern this returns the corresponding pattern stmt.  Otherwise
    DR_STMT is returned.  */
 
-inline gimple *
+inline stmt_vec_info
 vect_dr_stmt (data_reference *dr)
 {
   gimple *stmt = DR_STMT (dr);
@@ -1379,7 +1379,7 @@ vect_dr_stmt (data_reference *dr)
     return STMT_VINFO_RELATED_STMT (stmt_info);
   /* DR_STMT should never refer to a stmt in a pattern replacement.  */
   gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
-  return stmt;
+  return stmt_info;
 }
 
 /* Return true if the vect cost model is unlimited.  */
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:00.397042684 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:04.029010432 +0100
@@ -294,8 +294,8 @@ vect_analyze_data_ref_dependence (struct
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
-  stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
+  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
+  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
   lambda_vector dist_v;
   unsigned int loop_depth;
 
@@ -627,9 +627,9 @@ vect_slp_analyze_data_ref_dependence (st
 
   /* If dra and drb are part of the same interleaving chain consider
      them independent.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (vect_dr_stmt (dra)))
-      && (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (dra)))
-	  == DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (drb)))))
+  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
+      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
+	  == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
     return false;
 
   /* Unknown data dependence.  */
@@ -841,19 +841,18 @@ vect_record_base_alignments (vec_info *v
   unsigned int i;
   FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
     {
-      gimple *stmt = vect_dr_stmt (dr);
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       if (!DR_IS_CONDITIONAL_IN_STMT (dr)
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
 	{
-	  vect_record_base_alignment (vinfo, stmt, &DR_INNERMOST (dr));
+	  vect_record_base_alignment (vinfo, stmt_info, &DR_INNERMOST (dr));
 
 	  /* If DR is nested in the loop that is being vectorized, we can also
 	     record the alignment of the base wrt the outer loop.  */
-	  if (loop && nested_in_vect_loop_p (loop, stmt))
+	  if (loop && nested_in_vect_loop_p (loop, stmt_info))
 	    vect_record_base_alignment
-		(vinfo, stmt, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
+		(vinfo, stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
 	}
     }
 }
@@ -863,8 +862,7 @@ vect_record_base_alignments (vec_info *v
 static unsigned int
 vect_calculate_target_alignment (struct data_reference *dr)
 {
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   return targetm.vectorize.preferred_vector_alignment (vectype);
 }
@@ -882,8 +880,7 @@ vect_calculate_target_alignment (struct
 static void
 vect_compute_data_ref_alignment (struct data_reference *dr)
 {
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -923,7 +920,7 @@ vect_compute_data_ref_alignment (struct
      stays the same throughout the execution of the inner-loop, which is why
      we have to check that the stride of the dataref in the inner-loop evenly
      divides by the vector alignment.  */
-  else if (nested_in_vect_loop_p (loop, stmt))
+  else if (nested_in_vect_loop_p (loop, stmt_info))
     {
       step_preserves_misalignment_p
 	= (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
@@ -1074,8 +1071,8 @@ vect_update_misalignment_for_peel (struc
   struct data_reference *current_dr;
   int dr_size = vect_get_scalar_dr_size (dr);
   int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
-  stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
-  stmt_vec_info peel_stmt_info = vinfo_for_stmt (vect_dr_stmt (dr_peel));
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
 
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
@@ -1086,8 +1083,7 @@ vect_update_misalignment_for_peel (struc
 
   /* It can be assumed that the data refs with the same alignment as dr_peel
      are aligned in the vector loop.  */
-  same_aligned_drs
-    = STMT_VINFO_SAME_ALIGN_REFS (vinfo_for_stmt (vect_dr_stmt (dr_peel)));
+  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
   FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
     {
       if (current_dr != dr)
@@ -1167,15 +1163,14 @@ vect_verify_datarefs_alignment (loop_vec
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      gimple *stmt = vect_dr_stmt (dr);
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
 
       /* For interleaving, only the alignment of the first access matters.   */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
-	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
+	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
 	continue;
 
       /* Strided accesses perform only component accesses, alignment is
@@ -1212,8 +1207,7 @@ not_size_aligned (tree exp)
 static bool
 vector_alignment_reachable_p (struct data_reference *dr)
 {
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -1282,8 +1276,7 @@ vect_get_data_access_cost (struct data_r
 			   stmt_vector_for_cost *body_cost_vec,
 			   stmt_vector_for_cost *prologue_cost_vec)
 {
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   int ncopies;
 
@@ -1412,16 +1405,15 @@ vect_get_peeling_costs_all_drs (vec<data
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      gimple *stmt = vect_dr_stmt (dr);
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
 
       /* For interleaving, only the alignment of the first access
          matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
-          && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
-        continue;
+	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
+	continue;
 
       /* Strided accesses perform only component accesses, alignment is
          irrelevant for them.  */
@@ -1453,8 +1445,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   vect_peel_info elem = *slot;
   int dummy;
   unsigned int inside_cost = 0, outside_cost = 0;
-  gimple *stmt = vect_dr_stmt (elem->dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
 		       epilogue_cost_vec;
@@ -1537,8 +1528,6 @@ vect_peeling_supportable (loop_vec_info
   unsigned i;
   struct data_reference *dr = NULL;
   vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
-  gimple *stmt;
-  stmt_vec_info stmt_info;
   enum dr_alignment_support supportable_dr_alignment;
 
   /* Ensure that all data refs can be vectorized after the peel.  */
@@ -1549,12 +1538,11 @@ vect_peeling_supportable (loop_vec_info
       if (dr == dr0)
 	continue;
 
-      stmt = vect_dr_stmt (dr);
-      stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       /* For interleaving, only the alignment of the first access
 	 matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
-	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
+	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
 	continue;
 
       /* Strided accesses perform only component accesses, alignment is
@@ -1678,8 +1666,6 @@ vect_enhance_data_refs_alignment (loop_v
   bool do_peeling = false;
   bool do_versioning = false;
   bool stat;
-  gimple *stmt;
-  stmt_vec_info stmt_info;
   unsigned int npeel = 0;
   bool one_misalignment_known = false;
   bool one_misalignment_unknown = false;
@@ -1731,8 +1717,7 @@ vect_enhance_data_refs_alignment (loop_v
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt = vect_dr_stmt (dr);
-      stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1740,8 +1725,8 @@ vect_enhance_data_refs_alignment (loop_v
       /* For interleaving, only the alignment of the first access
          matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
-          && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
-        continue;
+	  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
+	continue;
 
       /* For scatter-gather or invariant accesses there is nothing
 	 to enhance.  */
@@ -1943,8 +1928,7 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       peel_for_unknown_alignment.peel_info.count = 1
-	+ STMT_VINFO_SAME_ALIGN_REFS
-	(vinfo_for_stmt (vect_dr_stmt (dr0))).length ();
+	+ STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
     }
 
   peel_for_unknown_alignment.peel_info.npeel = 0;
@@ -2025,8 +2009,7 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt = vect_dr_stmt (dr0);
-      stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
       vectype = STMT_VINFO_VECTYPE (stmt_info);
 
       if (known_alignment_for_access_p (dr0))
@@ -2049,7 +2032,7 @@ vect_enhance_data_refs_alignment (loop_v
 	  /* For interleaved data access every iteration accesses all the
 	     members of the group, therefore we divide the number of iterations
 	     by the group size.  */
-	  stmt_info = vinfo_for_stmt (vect_dr_stmt (dr0));
+	  stmt_info = vect_dr_stmt (dr0);
 	  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	    npeel /= DR_GROUP_SIZE (stmt_info);
 
@@ -2123,7 +2106,7 @@ vect_enhance_data_refs_alignment (loop_v
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
+		stmt_info = vect_dr_stmt (dr);
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
@@ -2172,14 +2155,13 @@ vect_enhance_data_refs_alignment (loop_v
     {
       FOR_EACH_VEC_ELT (datarefs, i, dr)
         {
-	  stmt = vect_dr_stmt (dr);
-	  stmt_info = vinfo_for_stmt (stmt);
+	  stmt_vec_info stmt_info = vect_dr_stmt (dr);
 
 	  /* For interleaving, only the alignment of the first access
 	     matters.  */
 	  if (aligned_access_p (dr)
 	      || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
-		  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt))
+		  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
 	    continue;
 
 	  if (STMT_VINFO_STRIDED_P (stmt_info))
@@ -2196,7 +2178,6 @@ vect_enhance_data_refs_alignment (loop_v
 
           if (!supportable_dr_alignment)
             {
-	      gimple *stmt;
               int mask;
               tree vectype;
 
@@ -2208,9 +2189,9 @@ vect_enhance_data_refs_alignment (loop_v
                   break;
                 }
 
-              stmt = vect_dr_stmt (dr);
-              vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
-              gcc_assert (vectype);
+	      stmt_info = vect_dr_stmt (dr);
+	      vectype = STMT_VINFO_VECTYPE (stmt_info);
+	      gcc_assert (vectype);
 
 	      /* At present we don't support versioning for alignment
 		 with variable VF, since there's no guarantee that the
@@ -2237,8 +2218,7 @@ vect_enhance_data_refs_alignment (loop_v
               gcc_assert (!LOOP_VINFO_PTR_MASK (loop_vinfo)
                           || LOOP_VINFO_PTR_MASK (loop_vinfo) == mask);
               LOOP_VINFO_PTR_MASK (loop_vinfo) = mask;
-              LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).safe_push (
-		      vect_dr_stmt (dr));
+	      LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).safe_push (stmt_info);
             }
         }
 
@@ -2298,8 +2278,8 @@ vect_find_same_alignment_drs (struct dat
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
-  stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
+  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
+  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
 
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
     return;
@@ -2372,7 +2352,7 @@ vect_analyze_data_refs_alignment (loop_v
   vect_record_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       if (STMT_VINFO_VECTORIZABLE (stmt_info))
 	vect_compute_data_ref_alignment (dr);
     }
@@ -2451,8 +2431,7 @@ vect_analyze_group_access_1 (struct data
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
   HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   HOST_WIDE_INT dr_step = -1;
@@ -2491,7 +2470,7 @@ vect_analyze_group_access_1 (struct data
     groupsize = 0;
 
   /* Not consecutive access is possible only if it is a part of interleaving.  */
-  if (!DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  if (!DR_GROUP_FIRST_ELEMENT (stmt_info))
     {
       /* Check if it this DR is a part of interleaving, and is a single
 	 element of the group that is accessed in the loop.  */
@@ -2502,8 +2481,8 @@ vect_analyze_group_access_1 (struct data
 	  && (dr_step % type_size) == 0
 	  && groupsize > 0)
 	{
-	  DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = stmt;
-	  DR_GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize;
+	  DR_GROUP_FIRST_ELEMENT (stmt_info) = stmt_info;
+	  DR_GROUP_SIZE (stmt_info) = groupsize;
 	  DR_GROUP_GAP (stmt_info) = groupsize - 1;
 	  if (dump_enabled_p ())
 	    {
@@ -2522,29 +2501,30 @@ vect_analyze_group_access_1 (struct data
         {
  	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 	                   "not consecutive access ");
-	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+			    stmt_info->stmt, 0);
         }
 
       if (bb_vinfo)
-        {
-          /* Mark the statement as unvectorizable.  */
-          STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr))) = false;
-          return true;
-        }
+	{
+	  /* Mark the statement as unvectorizable.  */
+	  STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	  return true;
+	}
 
       dump_printf_loc (MSG_NOTE, vect_location, "using strided accesses\n");
       STMT_VINFO_STRIDED_P (stmt_info) = true;
       return true;
     }
 
-  if (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) == stmt)
+  if (DR_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
     {
       /* First stmt in the interleaving chain. Check the chain.  */
-      gimple *next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
+      gimple *next = DR_GROUP_NEXT_ELEMENT (stmt_info);
       struct data_reference *data_ref = dr;
       unsigned int count = 1;
       tree prev_init = DR_INIT (data_ref);
-      gimple *prev = stmt;
+      gimple *prev = stmt_info;
       HOST_WIDE_INT diff, gaps = 0;
 
       /* By construction, all group members have INTEGER_CST DR_INITs.  */
@@ -2643,9 +2623,9 @@ vect_analyze_group_access_1 (struct data
 	 difference between the groupsize and the last accessed
 	 element.
 	 When there is no gap, this difference should be 0.  */
-      DR_GROUP_GAP (vinfo_for_stmt (stmt)) = groupsize - last_accessed_element;
+      DR_GROUP_GAP (stmt_info) = groupsize - last_accessed_element;
 
-      DR_GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize;
+      DR_GROUP_SIZE (stmt_info) = groupsize;
       if (dump_enabled_p ())
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
@@ -2656,22 +2636,22 @@ vect_analyze_group_access_1 (struct data
 	    dump_printf (MSG_NOTE, "store ");
 	  dump_printf (MSG_NOTE, "of size %u starting with ",
 		       (unsigned)groupsize);
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
-	  if (DR_GROUP_GAP (vinfo_for_stmt (stmt)) != 0)
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
+	  if (DR_GROUP_GAP (stmt_info) != 0)
 	    dump_printf_loc (MSG_NOTE, vect_location,
 			     "There is a gap of %u elements after the group\n",
-			     DR_GROUP_GAP (vinfo_for_stmt (stmt)));
+			     DR_GROUP_GAP (stmt_info));
 	}
 
       /* SLP: create an SLP data structure for every interleaving group of
 	 stores for further analysis in vect_analyse_slp.  */
       if (DR_IS_WRITE (dr) && !slp_impossible)
-        {
-          if (loop_vinfo)
-            LOOP_VINFO_GROUPED_STORES (loop_vinfo).safe_push (stmt);
-          if (bb_vinfo)
-            BB_VINFO_GROUPED_STORES (bb_vinfo).safe_push (stmt);
-        }
+	{
+	  if (loop_vinfo)
+	    LOOP_VINFO_GROUPED_STORES (loop_vinfo).safe_push (stmt_info);
+	  if (bb_vinfo)
+	    BB_VINFO_GROUPED_STORES (bb_vinfo).safe_push (stmt_info);
+	}
     }
 
   return true;
@@ -2689,7 +2669,7 @@ vect_analyze_group_access (struct data_r
     {
       /* Dissolve the group if present.  */
       gimple *next;
-      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (dr)));
+      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
       while (stmt)
 	{
 	  stmt_vec_info vinfo = vinfo_for_stmt (stmt);
@@ -2712,8 +2692,7 @@ vect_analyze_data_ref_access (struct dat
 {
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
 
@@ -2734,8 +2713,8 @@ vect_analyze_data_ref_access (struct dat
   /* Allow loads with zero step in inner-loop vectorization.  */
   if (loop_vinfo && integer_zerop (step))
     {
-      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
-      if (!nested_in_vect_loop_p (loop, stmt))
+      DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
+      if (!nested_in_vect_loop_p (loop, stmt_info))
 	return DR_IS_READ (dr);
       /* Allow references with zero step for outer loops marked
 	 with pragma omp simd only - it guarantees absence of
@@ -2749,11 +2728,11 @@ vect_analyze_data_ref_access (struct dat
 	}
     }
 
-  if (loop && nested_in_vect_loop_p (loop, stmt))
+  if (loop && nested_in_vect_loop_p (loop, stmt_info))
     {
       /* Interleaved accesses are not yet supported within outer-loop
         vectorization for references in the inner-loop.  */
-      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
+      DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
 
       /* For the rest of the analysis we use the outer-loop step.  */
       step = STMT_VINFO_DR_STEP (stmt_info);
@@ -2775,12 +2754,12 @@ vect_analyze_data_ref_access (struct dat
 	      && !compare_tree_int (TYPE_SIZE_UNIT (scalar_type), -dr_step)))
 	{
 	  /* Mark that it is not interleaving.  */
-	  DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
+	  DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
 	  return true;
 	}
     }
 
-  if (loop && nested_in_vect_loop_p (loop, stmt))
+  if (loop && nested_in_vect_loop_p (loop, stmt_info))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_NOTE, vect_location,
@@ -2939,7 +2918,7 @@ vect_analyze_data_ref_accesses (vec_info
   for (i = 0; i < datarefs_copy.length () - 1;)
     {
       data_reference_p dra = datarefs_copy[i];
-      stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
+      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
       stmt_vec_info lastinfo = NULL;
       if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
 	  || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
@@ -2950,7 +2929,7 @@ vect_analyze_data_ref_accesses (vec_info
       for (i = i + 1; i < datarefs_copy.length (); ++i)
 	{
 	  data_reference_p drb = datarefs_copy[i];
-	  stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
+	  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
 	  if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
 	      || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
 	    break;
@@ -3073,7 +3052,7 @@ vect_analyze_data_ref_accesses (vec_info
     }
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
-    if (STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr))) 
+    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
         && !vect_analyze_data_ref_access (dr))
       {
 	if (dump_enabled_p ())
@@ -3081,11 +3060,11 @@ vect_analyze_data_ref_accesses (vec_info
 	                   "not vectorized: complicated access pattern.\n");
 
         if (is_a <bb_vec_info> (vinfo))
-          {
-            /* Mark the statement as not vectorizable.  */
-            STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr))) = false;
-            continue;
-          }
+	  {
+	    /* Mark the statement as not vectorizable.  */
+	    STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	    continue;
+	  }
         else
 	  {
 	    datarefs_copy.release ();
@@ -3124,7 +3103,7 @@ vect_vfa_segment_size (struct data_refer
 static unsigned HOST_WIDE_INT
 vect_vfa_access_size (data_reference *dr)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (vect_dr_stmt (dr));
+  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
   tree ref_type = TREE_TYPE (DR_REF (dr));
   unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
   unsigned HOST_WIDE_INT access_size = ref_size;
@@ -3298,7 +3277,7 @@ vect_check_lower_bound (loop_vec_info lo
 static bool
 vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
@@ -4141,14 +4120,11 @@ vect_analyze_data_refs (vec_info *vinfo,
   vec<data_reference_p> datarefs = vinfo->shared->datarefs;
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      gimple *stmt;
-      stmt_vec_info stmt_info;
       enum { SG_NONE, GATHER, SCATTER } gatherscatter = SG_NONE;
       poly_uint64 vf;
 
       gcc_assert (DR_REF (dr));
-      stmt = vect_dr_stmt (dr);
-      stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
 
       /* Check that analysis of the data-ref succeeded.  */
       if (!DR_BASE_ADDRESS (dr) || !DR_OFFSET (dr) || !DR_INIT (dr)
@@ -4168,7 +4144,7 @@ vect_analyze_data_refs (vec_info *vinfo,
 	  /* If target supports vector gather loads or scatter stores,
 	     see if they can't be used.  */
 	  if (is_a <loop_vec_info> (vinfo)
-	      && !nested_in_vect_loop_p (loop, stmt))
+	      && !nested_in_vect_loop_p (loop, stmt_info))
 	    {
 	      if (maybe_gather || maybe_scatter)
 		{
@@ -4186,7 +4162,8 @@ vect_analyze_data_refs (vec_info *vinfo,
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                                    "not vectorized: data ref analysis "
                                    "failed ");
-		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				    stmt_info->stmt, 0);
 		}
 	      if (is_a <bb_vec_info> (vinfo))
 		{
@@ -4202,14 +4179,15 @@ vect_analyze_data_refs (vec_info *vinfo,
       /* See if this was detected as SIMD lane access.  */
       if (dr->aux == (void *)-1)
 	{
-	  if (nested_in_vect_loop_p (loop, stmt))
+	  if (nested_in_vect_loop_p (loop, stmt_info))
 	    {
 	      if (dump_enabled_p ())
 		{
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				   "not vectorized: data ref analysis "
 				   "failed ");
-		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				    stmt_info->stmt, 0);
 		}
 	      return false;
 	    }
@@ -4224,7 +4202,8 @@ vect_analyze_data_refs (vec_info *vinfo,
               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                                "not vectorized: base object not addressable "
 			       "for stmt: ");
-              dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				stmt_info->stmt, 0);
             }
           if (is_a <bb_vec_info> (vinfo))
 	    {
@@ -4240,14 +4219,15 @@ vect_analyze_data_refs (vec_info *vinfo,
 	  && DR_STEP (dr)
 	  && TREE_CODE (DR_STEP (dr)) != INTEGER_CST)
 	{
-	  if (nested_in_vect_loop_p (loop, stmt))
+	  if (nested_in_vect_loop_p (loop, stmt_info))
 	    {
 	      if (dump_enabled_p ())
 		{
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, 
                                    "not vectorized: not suitable for strided "
                                    "load ");
-		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				    stmt_info->stmt, 0);
 		}
 	      return false;
 	    }
@@ -4262,7 +4242,7 @@ vect_analyze_data_refs (vec_info *vinfo,
 	 inner-most enclosing loop).  We do that by building a reference to the
 	 first location accessed by the inner-loop, and analyze it relative to
 	 the outer-loop.  */
-      if (loop && nested_in_vect_loop_p (loop, stmt))
+      if (loop && nested_in_vect_loop_p (loop, stmt_info))
 	{
 	  /* Build a reference to the first location accessed by the
 	     inner loop: *(BASE + INIT + OFFSET).  By construction,
@@ -4329,7 +4309,8 @@ vect_analyze_data_refs (vec_info *vinfo,
             {
               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                                "not vectorized: no vectype for stmt: ");
-              dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				stmt_info->stmt, 0);
               dump_printf (MSG_MISSED_OPTIMIZATION, " scalar_type: ");
               dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_DETAILS,
                                  scalar_type);
@@ -4351,7 +4332,7 @@ vect_analyze_data_refs (vec_info *vinfo,
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
 			       "got vectype for stmt: ");
-	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	      dump_generic_expr (MSG_NOTE, TDF_SLIM,
 				 STMT_VINFO_VECTYPE (stmt_info));
 	      dump_printf (MSG_NOTE, "\n");
@@ -4366,7 +4347,8 @@ vect_analyze_data_refs (vec_info *vinfo,
       if (gatherscatter != SG_NONE)
 	{
 	  gather_scatter_info gs_info;
-	  if (!vect_check_gather_scatter (stmt, as_a <loop_vec_info> (vinfo),
+	  if (!vect_check_gather_scatter (stmt_info,
+					  as_a <loop_vec_info> (vinfo),
 					  &gs_info)
 	      || !get_vectype_for_scalar_type (TREE_TYPE (gs_info.offset)))
 	    {
@@ -4378,7 +4360,8 @@ vect_analyze_data_refs (vec_info *vinfo,
 				   "load " :
 				   "not vectorized: not suitable for scatter "
 				   "store ");
-		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				    stmt_info->stmt, 0);
 		}
 	      return false;
 	    }
@@ -6459,8 +6442,7 @@ enum dr_alignment_support
 vect_supportable_dr_alignment (struct data_reference *dr,
                                bool check_aligned_accesses)
 {
-  gimple *stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   machine_mode mode = TYPE_MODE (vectype);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
@@ -6472,16 +6454,16 @@ vect_supportable_dr_alignment (struct da
 
   /* For now assume all conditional loads/stores support unaligned
      access without any special code.  */
-  if (is_gimple_call (stmt)
-      && gimple_call_internal_p (stmt)
-      && (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
-	  || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
-    return dr_unaligned_supported;
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
+    if (gimple_call_internal_p (stmt)
+	&& (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
+	    || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
+      return dr_unaligned_supported;
 
   if (loop_vinfo)
     {
       vect_loop = LOOP_VINFO_LOOP (loop_vinfo);
-      nested_in_vect_loop = nested_in_vect_loop_p (vect_loop, stmt);
+      nested_in_vect_loop = nested_in_vect_loop_p (vect_loop, stmt_info);
     }
 
   /* Possibly unaligned access.  */
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:22:33.821278677 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:23:04.029010432 +0100
@@ -1560,8 +1560,7 @@ vect_update_ivs_after_vectorizer (loop_v
 get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
 {
   struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-  gimple *dr_stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (dr_stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
@@ -1571,7 +1570,7 @@ get_misalign_in_elems (gimple **seq, loo
   tree offset = (negative
 		 ? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1)
 		 : size_zero_node);
-  tree start_addr = vect_create_addr_base_for_vector_ref (dr_stmt, seq,
+  tree start_addr = vect_create_addr_base_for_vector_ref (stmt_info, seq,
 							  offset);
   tree type = unsigned_type_for (TREE_TYPE (start_addr));
   tree target_align_minus_1 = build_int_cst (type, target_align - 1);
@@ -1631,8 +1630,7 @@ vect_gen_prolog_loop_niters (loop_vec_in
   tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
   gimple_seq stmts = NULL, new_stmts = NULL;
   tree iters, iters_name;
-  gimple *dr_stmt = vect_dr_stmt (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (dr_stmt);
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:00.397042684 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:04.033010396 +0100
@@ -2145,8 +2145,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
 	    {
 	      struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-	      tree vectype
-		= STMT_VINFO_VECTYPE (vinfo_for_stmt (vect_dr_stmt (dr)));
+	      tree vectype = STMT_VINFO_VECTYPE (vect_dr_stmt (dr));
 	      niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
 	    }
 	  else

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [20/46] Make *FIRST_ELEMENT and *NEXT_ELEMENT stmt_vec_infos
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (18 preceding siblings ...)
  2018-07-24 10:01 ` [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos Richard Sandiford
@ 2018-07-24 10:01 ` Richard Sandiford
  2018-07-25  9:28   ` Richard Biener
  2018-07-24 10:01 ` [19/46] Make vect_dr_stmt return a stmt_vec_info Richard Sandiford
                   ` (25 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:01 UTC (permalink / raw)
  To: gcc-patches

This patch changes the {REDUC,DR}_GROUP_{FIRST,NEXT}_ELEMENT fields
from a gimple stmt to a stmt_vec_info.
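
As an illustration (a sketch distilled from the call sites touched
below, not a new interface), a typical walk over an interleaving
chain previously had to look up the stmt_vec_info again on every
step:

  gimple *next = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
  while (next)
    /* Each iteration pays for another vinfo lookup.  */
    next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));

whereas with stmt_vec_info fields the chain can be walked directly:

  stmt_vec_info next = DR_GROUP_FIRST_ELEMENT (stmt_info);
  while (next)
    /* No vinfo_for_stmt lookups needed.  */
    next = DR_GROUP_NEXT_ELEMENT (next);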


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::first_element): Change from
	a gimple stmt to a stmt_vec_info.
	(_stmt_vec_info::next_element): Likewise.
	* tree-vect-data-refs.c (vect_update_misalignment_for_peel)
	(vect_slp_analyze_and_verify_node_alignment)
	(vect_analyze_group_access_1, vect_analyze_group_access)
	(vect_small_gap_p, vect_prune_runtime_alias_test_list)
	(vect_create_data_ref_ptr, vect_record_grouped_load_vectors)
	(vect_supportable_dr_alignment): Update accordingly.
	* tree-vect-loop.c (vect_fixup_reduc_chain): Likewise.
	(vect_fixup_scalar_cycles_with_patterns, vect_is_slp_reduction)
	(vect_is_simple_reduction, vectorizable_reduction): Likewise.
	* tree-vect-patterns.c (vect_reassociating_reduction_p): Likewise.
	* tree-vect-slp.c (vect_build_slp_tree_1)
	(vect_attempt_slp_rearrange_stmts, vect_supported_load_permutation_p)
	(vect_split_slp_store_group, vect_analyze_slp_instance)
	(vect_analyze_slp, vect_transform_slp_perm_load): Likewise.
	* tree-vect-stmts.c (vect_model_store_cost, vect_model_load_cost)
	(get_group_load_store_type, get_load_store_type)
	(get_group_alias_ptr_type, vectorizable_store, vectorizable_load)
	(vect_transform_stmt, vect_remove_stores): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:04.033010396 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:08.536970400 +0100
@@ -871,9 +871,9 @@ struct _stmt_vec_info {
 
   /* Interleaving and reduction chains info.  */
   /* First element in the group.  */
-  gimple *first_element;
+  stmt_vec_info first_element;
   /* Pointer to the next element in the group.  */
-  gimple *next_element;
+  stmt_vec_info next_element;
   /* For data-refs, in case that two or more stmts share data-ref, this is the
      pointer to the previously detected stmt with the same dr.  */
   gimple *same_dr_stmt;
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:04.029010432 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:08.532970436 +0100
@@ -1077,7 +1077,7 @@ vect_update_misalignment_for_peel (struc
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
-    dr_size *= DR_GROUP_SIZE (vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info)));
+    dr_size *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
   if (STMT_VINFO_GROUPED_ACCESS (peel_stmt_info))
     dr_peel_size *= DR_GROUP_SIZE (peel_stmt_info);
 
@@ -2370,12 +2370,11 @@ vect_slp_analyze_and_verify_node_alignme
      the node is permuted in which case we start from the first
      element in the group.  */
   stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  gimple *first_stmt = first_stmt_info->stmt;
   data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
-    first_stmt = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
+    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
 
-  data_reference_p dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
   vect_compute_data_ref_alignment (dr);
   /* For creating the data-ref pointer we need alignment of the
      first element anyway.  */
@@ -2520,11 +2519,11 @@ vect_analyze_group_access_1 (struct data
   if (DR_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
     {
       /* First stmt in the interleaving chain. Check the chain.  */
-      gimple *next = DR_GROUP_NEXT_ELEMENT (stmt_info);
+      stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
       struct data_reference *data_ref = dr;
       unsigned int count = 1;
       tree prev_init = DR_INIT (data_ref);
-      gimple *prev = stmt_info;
+      stmt_vec_info prev = stmt_info;
       HOST_WIDE_INT diff, gaps = 0;
 
       /* By construction, all group members have INTEGER_CST DR_INITs.  */
@@ -2535,8 +2534,7 @@ vect_analyze_group_access_1 (struct data
              stmt, and the rest get their vectorized loads from the first
              one.  */
           if (!tree_int_cst_compare (DR_INIT (data_ref),
-                                     DR_INIT (STMT_VINFO_DATA_REF (
-						   vinfo_for_stmt (next)))))
+				     DR_INIT (STMT_VINFO_DATA_REF (next))))
             {
               if (DR_IS_WRITE (data_ref))
                 {
@@ -2550,16 +2548,16 @@ vect_analyze_group_access_1 (struct data
 		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				 "Two or more load stmts share the same dr.\n");
 
-              /* For load use the same data-ref load.  */
-              DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next)) = prev;
+	      /* For load use the same data-ref load.  */
+	      DR_GROUP_SAME_DR_STMT (next) = prev;
 
-              prev = next;
-              next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
-              continue;
+	      prev = next;
+	      next = DR_GROUP_NEXT_ELEMENT (next);
+	      continue;
             }
 
-          prev = next;
-          data_ref = STMT_VINFO_DATA_REF (vinfo_for_stmt (next));
+	  prev = next;
+	  data_ref = STMT_VINFO_DATA_REF (next);
 
 	  /* All group members have the same STEP by construction.  */
 	  gcc_checking_assert (operand_equal_p (DR_STEP (data_ref), step, 0));
@@ -2587,12 +2585,12 @@ vect_analyze_group_access_1 (struct data
 
           /* Store the gap from the previous member of the group. If there is no
              gap in the access, DR_GROUP_GAP is always 1.  */
-          DR_GROUP_GAP (vinfo_for_stmt (next)) = diff;
+	  DR_GROUP_GAP (next) = diff;
 
-          prev_init = DR_INIT (data_ref);
-          next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
-          /* Count the number of data-refs in the chain.  */
-          count++;
+	  prev_init = DR_INIT (data_ref);
+	  next = DR_GROUP_NEXT_ELEMENT (next);
+	  /* Count the number of data-refs in the chain.  */
+	  count++;
         }
 
       if (groupsize == 0)
@@ -2668,15 +2666,13 @@ vect_analyze_group_access (struct data_r
   if (!vect_analyze_group_access_1 (dr))
     {
       /* Dissolve the group if present.  */
-      gimple *next;
-      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
-      while (stmt)
-	{
-	  stmt_vec_info vinfo = vinfo_for_stmt (stmt);
-	  next = DR_GROUP_NEXT_ELEMENT (vinfo);
-	  DR_GROUP_FIRST_ELEMENT (vinfo) = NULL;
-	  DR_GROUP_NEXT_ELEMENT (vinfo) = NULL;
-	  stmt = next;
+      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
+      while (stmt_info)
+	{
+	  stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
+	  DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
+	  DR_GROUP_NEXT_ELEMENT (stmt_info) = NULL;
+	  stmt_info = next;
 	}
       return false;
     }
@@ -3281,7 +3277,7 @@ vect_small_gap_p (loop_vec_info loop_vin
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
-    count *= DR_GROUP_SIZE (vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info)));
+    count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
   return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
 }
 
@@ -3379,11 +3375,9 @@ vect_prune_runtime_alias_test_list (loop
       int comp_res;
       poly_uint64 lower_bound;
       struct data_reference *dr_a, *dr_b;
-      gimple *dr_group_first_a, *dr_group_first_b;
       tree segment_length_a, segment_length_b;
       unsigned HOST_WIDE_INT access_size_a, access_size_b;
       unsigned int align_a, align_b;
-      gimple *stmt_a, *stmt_b;
 
       /* Ignore the alias if the VF we chose ended up being no greater
 	 than the dependence distance.  */
@@ -3409,15 +3403,15 @@ vect_prune_runtime_alias_test_list (loop
 	}
 
       dr_a = DDR_A (ddr);
-      stmt_a = vect_dr_stmt (DDR_A (ddr));
+      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
 
       dr_b = DDR_B (ddr);
-      stmt_b = vect_dr_stmt (DDR_B (ddr));
+      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
 
       /* Skip the pair if inter-iteration dependencies are irrelevant
 	 and intra-iteration dependencies are guaranteed to be honored.  */
       if (ignore_step_p
-	  && (vect_preserves_scalar_order_p (stmt_a, stmt_b)
+	  && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
 	      || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
 	{
 	  if (dump_enabled_p ())
@@ -3468,18 +3462,18 @@ vect_prune_runtime_alias_test_list (loop
 	  continue;
 	}
 
-      dr_group_first_a = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt_a));
+      stmt_vec_info dr_group_first_a = DR_GROUP_FIRST_ELEMENT (stmt_info_a);
       if (dr_group_first_a)
 	{
-	  stmt_a = dr_group_first_a;
-	  dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt_a));
+	  stmt_info_a = dr_group_first_a;
+	  dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
 	}
 
-      dr_group_first_b = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt_b));
+      stmt_vec_info dr_group_first_b = DR_GROUP_FIRST_ELEMENT (stmt_info_b);
       if (dr_group_first_b)
 	{
-	  stmt_b = dr_group_first_b;
-	  dr_b = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt_b));
+	  stmt_info_b = dr_group_first_b;
+	  dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
 	}
 
       if (ignore_step_p)
@@ -4734,10 +4728,9 @@ vect_create_data_ref_ptr (gimple *stmt,
   /* Likewise for any of the data references in the stmt group.  */
   else if (DR_GROUP_SIZE (stmt_info) > 1)
     {
-      gimple *orig_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
+      stmt_vec_info sinfo = DR_GROUP_FIRST_ELEMENT (stmt_info);
       do
 	{
-	  stmt_vec_info sinfo = vinfo_for_stmt (orig_stmt);
 	  struct data_reference *sdr = STMT_VINFO_DATA_REF (sinfo);
 	  if (!alias_sets_conflict_p (get_alias_set (aggr_type),
 				      get_alias_set (DR_REF (sdr))))
@@ -4745,9 +4738,9 @@ vect_create_data_ref_ptr (gimple *stmt,
 	      need_ref_all = true;
 	      break;
 	    }
-	  orig_stmt = DR_GROUP_NEXT_ELEMENT (sinfo);
+	  sinfo = DR_GROUP_NEXT_ELEMENT (sinfo);
 	}
-      while (orig_stmt);
+      while (sinfo);
     }
   aggr_ptr_type = build_pointer_type_for_mode (aggr_type, ptr_mode,
 					       need_ref_all);
@@ -6345,19 +6338,18 @@ vect_record_grouped_load_vectors (gimple
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
-  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-  gimple *next_stmt;
+  stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
   unsigned int i, gap_count;
   tree tmp_data_ref;
 
   /* Put a permuted data-ref in the VECTORIZED_STMT field.
      Since we scan the chain starting from it's first node, their order
      corresponds the order of data-refs in RESULT_CHAIN.  */
-  next_stmt = first_stmt;
+  stmt_vec_info next_stmt_info = first_stmt_info;
   gap_count = 1;
   FOR_EACH_VEC_ELT (result_chain, i, tmp_data_ref)
     {
-      if (!next_stmt)
+      if (!next_stmt_info)
 	break;
 
       /* Skip the gaps.  Loads created for the gaps will be removed by dead
@@ -6366,27 +6358,27 @@ vect_record_grouped_load_vectors (gimple
        DR_GROUP_GAP is the number of steps in elements from the previous
        access (if there is no gap DR_GROUP_GAP is 1).  We skip loads that
        correspond to the gaps.  */
-      if (next_stmt != first_stmt
-          && gap_count < DR_GROUP_GAP (vinfo_for_stmt (next_stmt)))
+      if (next_stmt_info != first_stmt_info
+	  && gap_count < DR_GROUP_GAP (next_stmt_info))
       {
         gap_count++;
         continue;
       }
 
-      while (next_stmt)
+      while (next_stmt_info)
         {
 	  stmt_vec_info new_stmt_info = vinfo->lookup_def (tmp_data_ref);
 	  /* We assume that if VEC_STMT is not NULL, this is a case of multiple
 	     copies, and we put the new vector statement in the first available
 	     RELATED_STMT.  */
-	  if (!STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)))
-	    STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt_info;
+	  if (!STMT_VINFO_VEC_STMT (next_stmt_info))
+	    STMT_VINFO_VEC_STMT (next_stmt_info) = new_stmt_info;
 	  else
             {
-              if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
+	      if (!DR_GROUP_SAME_DR_STMT (next_stmt_info))
                 {
 		  stmt_vec_info prev_stmt_info
-		    = STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
+		    = STMT_VINFO_VEC_STMT (next_stmt_info);
 		  stmt_vec_info rel_stmt_info
 		    = STMT_VINFO_RELATED_STMT (prev_stmt_info);
 		  while (rel_stmt_info)
@@ -6399,12 +6391,12 @@ vect_record_grouped_load_vectors (gimple
                 }
             }
 
-	  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+	  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
 	  gap_count = 1;
-	  /* If NEXT_STMT accesses the same DR as the previous statement,
+	  /* If NEXT_STMT_INFO accesses the same DR as the previous statement,
 	     put the same TMP_DATA_REF as its vectorized statement; otherwise
 	     get the next data-ref from RESULT_CHAIN.  */
-	  if (!next_stmt || !DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
+	  if (!next_stmt_info || !DR_GROUP_SAME_DR_STMT (next_stmt_info))
 	    break;
         }
     }
@@ -6545,8 +6537,8 @@ vect_supportable_dr_alignment (struct da
 	  if (loop_vinfo
 	      && STMT_SLP_TYPE (stmt_info)
 	      && !multiple_p (LOOP_VINFO_VECT_FACTOR (loop_vinfo)
-			      * DR_GROUP_SIZE (vinfo_for_stmt
-					    (DR_GROUP_FIRST_ELEMENT (stmt_info))),
+			      * (DR_GROUP_SIZE
+				 (DR_GROUP_FIRST_ELEMENT (stmt_info))),
 			      TYPE_VECTOR_SUBPARTS (vectype)))
 	    ;
 	  else if (!loop_vinfo
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:04.033010396 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:08.532970436 +0100
@@ -661,14 +661,14 @@ vect_fixup_reduc_chain (gimple *stmt)
   REDUC_GROUP_SIZE (firstp) = REDUC_GROUP_SIZE (stmt_info);
   do
     {
-      stmtp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
+      stmtp = STMT_VINFO_RELATED_STMT (stmt_info);
       REDUC_GROUP_FIRST_ELEMENT (stmtp) = firstp;
-      stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
-      if (stmt)
+      stmt_info = REDUC_GROUP_NEXT_ELEMENT (stmt_info);
+      if (stmt_info)
 	REDUC_GROUP_NEXT_ELEMENT (stmtp)
-	  = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
+	  = STMT_VINFO_RELATED_STMT (stmt_info);
     }
-  while (stmt);
+  while (stmt_info);
   STMT_VINFO_DEF_TYPE (stmtp) = vect_reduction_def;
 }
 
@@ -683,12 +683,12 @@ vect_fixup_scalar_cycles_with_patterns (
   FOR_EACH_VEC_ELT (LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo), i, first)
     if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (first)))
       {
-	gimple *next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
+	stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
 	while (next)
 	  {
-	    if (! STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next)))
+	    if (! STMT_VINFO_IN_PATTERN_P (next))
 	      break;
-	    next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
+	    next = REDUC_GROUP_NEXT_ELEMENT (next);
 	  }
 	/* If not all stmt in the chain are patterns try to handle
 	   the chain without patterns.  */
@@ -2188,7 +2188,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
       vinfo = SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0];
       if (! STMT_VINFO_GROUPED_ACCESS (vinfo))
 	continue;
-      vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
+      vinfo = DR_GROUP_FIRST_ELEMENT (vinfo);
       unsigned int size = DR_GROUP_SIZE (vinfo);
       tree vectype = STMT_VINFO_VECTYPE (vinfo);
       if (! vect_store_lanes_supported (vectype, size, false)
@@ -2198,7 +2198,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
       FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (instance), j, node)
 	{
 	  vinfo = SLP_TREE_SCALAR_STMTS (node)[0];
-	  vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
+	  vinfo = DR_GROUP_FIRST_ELEMENT (vinfo);
 	  bool single_element_p = !DR_GROUP_NEXT_ELEMENT (vinfo);
 	  size = DR_GROUP_SIZE (vinfo);
 	  vectype = STMT_VINFO_VECTYPE (vinfo);
@@ -2527,7 +2527,7 @@ vect_is_slp_reduction (loop_vec_info loo
   struct loop *loop = (gimple_bb (phi))->loop_father;
   struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
   enum tree_code code;
-  gimple *loop_use_stmt = NULL, *first, *next_stmt;
+  gimple *loop_use_stmt = NULL;
   stmt_vec_info use_stmt_info, current_stmt_info = NULL;
   tree lhs;
   imm_use_iterator imm_iter;
@@ -2592,12 +2592,12 @@ vect_is_slp_reduction (loop_vec_info loo
       use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
       if (current_stmt_info)
         {
-	  REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = loop_use_stmt;
+	  REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = use_stmt_info;
           REDUC_GROUP_FIRST_ELEMENT (use_stmt_info)
             = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
         }
       else
-	REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = loop_use_stmt;
+	REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = use_stmt_info;
 
       lhs = gimple_assign_lhs (loop_use_stmt);
       current_stmt_info = use_stmt_info;
@@ -2610,9 +2610,10 @@ vect_is_slp_reduction (loop_vec_info loo
   /* Swap the operands, if needed, to make the reduction operand be the second
      operand.  */
   lhs = PHI_RESULT (phi);
-  next_stmt = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
-  while (next_stmt)
+  stmt_vec_info next_stmt_info = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
+  while (next_stmt_info)
     {
+      gassign *next_stmt = as_a <gassign *> (next_stmt_info->stmt);
       if (gimple_assign_rhs2 (next_stmt) == lhs)
 	{
 	  tree op = gimple_assign_rhs1 (next_stmt);
@@ -2626,7 +2627,7 @@ vect_is_slp_reduction (loop_vec_info loo
 	      && vect_valid_reduction_input_p (def_stmt_info))
 	    {
 	      lhs = gimple_assign_lhs (next_stmt);
-	      next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+	      next_stmt_info = REDUC_GROUP_NEXT_ELEMENT (next_stmt_info);
  	      continue;
 	    }
 
@@ -2663,13 +2664,14 @@ vect_is_slp_reduction (loop_vec_info loo
         }
 
       lhs = gimple_assign_lhs (next_stmt);
-      next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+      next_stmt_info = REDUC_GROUP_NEXT_ELEMENT (next_stmt_info);
     }
 
   /* Save the chain for further analysis in SLP detection.  */
-  first = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
-  LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first);
-  REDUC_GROUP_SIZE (vinfo_for_stmt (first)) = size;
+  stmt_vec_info first_stmt_info
+    = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
+  LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first_stmt_info);
+  REDUC_GROUP_SIZE (first_stmt_info) = size;
 
   return true;
 }
@@ -3254,12 +3256,12 @@ vect_is_simple_reduction (loop_vec_info
     }
 
   /* Dissolve group eventually half-built by vect_is_slp_reduction.  */
-  gimple *first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
+  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
   while (first)
     {
-      gimple *next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
-      REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first)) = NULL;
-      REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first)) = NULL;
+      stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
+      REDUC_GROUP_FIRST_ELEMENT (first) = NULL;
+      REDUC_GROUP_NEXT_ELEMENT (first) = NULL;
       first = next;
     }
 
@@ -6130,7 +6132,8 @@ vectorizable_reduction (gimple *stmt, gi
     }
 
   if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
-    gcc_assert (slp_node && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt);
+    gcc_assert (slp_node
+		&& REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
 
   if (gimple_code (stmt) == GIMPLE_PHI)
     {
@@ -6784,8 +6787,8 @@ vectorizable_reduction (gimple *stmt, gi
   tree neutral_op = NULL_TREE;
   if (slp_node)
     neutral_op = neutral_op_for_slp_reduction
-		   (slp_node_instance->reduc_phis, code,
-		    REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL);
+      (slp_node_instance->reduc_phis, code,
+       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL_STMT_VEC_INFO);
 
   if (double_reduc && reduction_type == FOLD_LEFT_REDUCTION)
     {
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:22:57.277070390 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:23:08.536970400 +0100
@@ -820,7 +820,7 @@ vect_reassociating_reduction_p (stmt_vec
 {
   return (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
 	  ? STMT_VINFO_REDUC_TYPE (stmt_vinfo) != FOLD_LEFT_REDUCTION
-	  : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL);
+	  : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL_STMT_VEC_INFO);
 }
 
 /* As above, but also require it to have code CODE and to be a reduction
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:00.401042649 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:08.536970400 +0100
@@ -712,7 +712,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
   int icode;
   machine_mode optab_op2_mode;
   machine_mode vec_mode;
-  gimple *first_load = NULL, *prev_first_load = NULL;
+  stmt_vec_info first_load = NULL, prev_first_load = NULL;
 
   /* For every stmt in NODE find its def stmt/s.  */
   stmt_vec_info stmt_info;
@@ -1692,8 +1692,7 @@ vect_attempt_slp_rearrange_stmts (slp_in
   FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
     {
       stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-      first_stmt_info
-	= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
+      first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
       /* But we have to keep those permutations that are required because
          of handling of gaps.  */
       if (known_eq (unrolling_factor, 1U)
@@ -1717,7 +1716,6 @@ vect_supported_load_permutation_p (slp_i
   unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
   unsigned int i, j, k, next;
   slp_tree node;
-  gimple *next_load;
 
   if (dump_enabled_p ())
     {
@@ -1766,26 +1764,25 @@ vect_supported_load_permutation_p (slp_i
 	  if (!SLP_TREE_LOAD_PERMUTATION (node).exists ())
 	    continue;
 	  bool subchain_p = true;
-          next_load = NULL;
+	  stmt_vec_info next_load_info = NULL;
 	  stmt_vec_info load_info;
 	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
 	    {
 	      if (j != 0
-		  && (next_load != load_info
+		  && (next_load_info != load_info
 		      || DR_GROUP_GAP (load_info) != 1))
 		{
 		  subchain_p = false;
 		  break;
 		}
-	      next_load = DR_GROUP_NEXT_ELEMENT (load_info);
+	      next_load_info = DR_GROUP_NEXT_ELEMENT (load_info);
 	    }
 	  if (subchain_p)
 	    SLP_TREE_LOAD_PERMUTATION (node).release ();
 	  else
 	    {
 	      stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
-	      group_info
-		= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
+	      group_info = DR_GROUP_FIRST_ELEMENT (group_info);
 	      unsigned HOST_WIDE_INT nunits;
 	      unsigned k, maxk = 0;
 	      FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
@@ -1868,33 +1865,33 @@ vect_find_last_scalar_stmt_in_slp (slp_t
 vect_split_slp_store_group (gimple *first_stmt, unsigned group1_size)
 {
   stmt_vec_info first_vinfo = vinfo_for_stmt (first_stmt);
-  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_stmt);
+  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_vinfo);
   gcc_assert (group1_size > 0);
   int group2_size = DR_GROUP_SIZE (first_vinfo) - group1_size;
   gcc_assert (group2_size > 0);
   DR_GROUP_SIZE (first_vinfo) = group1_size;
 
-  gimple *stmt = first_stmt;
+  stmt_vec_info stmt_info = first_vinfo;
   for (unsigned i = group1_size; i > 1; i--)
     {
-      stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
-      gcc_assert (DR_GROUP_GAP (vinfo_for_stmt (stmt)) == 1);
+      stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info);
+      gcc_assert (DR_GROUP_GAP (stmt_info) == 1);
     }
   /* STMT is now the last element of the first group.  */
-  gimple *group2 = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
-  DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt)) = 0;
+  stmt_vec_info group2 = DR_GROUP_NEXT_ELEMENT (stmt_info);
+  DR_GROUP_NEXT_ELEMENT (stmt_info) = 0;
 
-  DR_GROUP_SIZE (vinfo_for_stmt (group2)) = group2_size;
-  for (stmt = group2; stmt; stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt)))
+  DR_GROUP_SIZE (group2) = group2_size;
+  for (stmt_info = group2; stmt_info;
+       stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info))
     {
-      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = group2;
-      gcc_assert (DR_GROUP_GAP (vinfo_for_stmt (stmt)) == 1);
+      DR_GROUP_FIRST_ELEMENT (stmt_info) = group2;
+      gcc_assert (DR_GROUP_GAP (stmt_info) == 1);
     }
 
   /* For the second group, the DR_GROUP_GAP is that before the original group,
      plus skipping over the first vector.  */
-  DR_GROUP_GAP (vinfo_for_stmt (group2))
-    = DR_GROUP_GAP (first_vinfo) + group1_size;
+  DR_GROUP_GAP (group2) = DR_GROUP_GAP (first_vinfo) + group1_size;
 
   /* DR_GROUP_GAP of the first group now has to skip over the second group too.  */
   DR_GROUP_GAP (first_vinfo) += group2_size;
@@ -1928,8 +1925,6 @@ vect_analyze_slp_instance (vec_info *vin
   slp_tree node;
   unsigned int group_size;
   tree vectype, scalar_type = NULL_TREE;
-  gimple *next;
-  stmt_vec_info next_info;
   unsigned int i;
   vec<slp_tree> loads;
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
@@ -1970,34 +1965,32 @@ vect_analyze_slp_instance (vec_info *vin
 
   /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
   scalar_stmts.create (group_size);
-  next = stmt;
+  stmt_vec_info next_info = stmt_info;
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
       /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
-      while (next)
+      while (next_info)
         {
-	  next_info = vinfo_for_stmt (next);
 	  if (STMT_VINFO_IN_PATTERN_P (next_info)
 	      && STMT_VINFO_RELATED_STMT (next_info))
 	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
 	    scalar_stmts.safe_push (next_info);
-          next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
+	  next_info = DR_GROUP_NEXT_ELEMENT (next_info);
         }
     }
   else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       /* Collect the reduction stmts and store them in
 	 SLP_TREE_SCALAR_STMTS.  */
-      while (next)
+      while (next_info)
         {
-	  next_info = vinfo_for_stmt (next);
 	  if (STMT_VINFO_IN_PATTERN_P (next_info)
 	      && STMT_VINFO_RELATED_STMT (next_info))
 	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
 	    scalar_stmts.safe_push (next_info);
-          next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
+	  next_info = REDUC_GROUP_NEXT_ELEMENT (next_info);
         }
       /* Mark the first element of the reduction chain as reduction to properly
 	 transform the node.  In the reduction analysis phase only the last
@@ -2067,15 +2060,14 @@ vect_analyze_slp_instance (vec_info *vin
 	  vec<unsigned> load_permutation;
 	  int j;
 	  stmt_vec_info load_info;
-	  gimple *first_stmt;
 	  bool this_load_permuted = false;
 	  load_permutation.create (group_size);
-	  first_stmt = DR_GROUP_FIRST_ELEMENT
+	  stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT
 	    (SLP_TREE_SCALAR_STMTS (load_node)[0]);
 	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
 	    {
 	      int load_place = vect_get_place_in_interleaving_chain
-		(load_info, first_stmt);
+		(load_info, first_stmt_info);
 	      gcc_assert (load_place != -1);
 	      if (load_place != j)
 		this_load_permuted = true;
@@ -2086,8 +2078,8 @@ vect_analyze_slp_instance (vec_info *vin
 	         a gap either because the group is larger than the SLP
 		 group-size or because there is a gap between the groups.  */
 	      && (known_eq (unrolling_factor, 1U)
-		  || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
-		      && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0)))
+		  || (group_size == DR_GROUP_SIZE (first_stmt_info)
+		      && DR_GROUP_GAP (first_stmt_info) == 0)))
 	    {
 	      load_permutation.release ();
 	      continue;
@@ -2122,11 +2114,9 @@ vect_analyze_slp_instance (vec_info *vin
 	  slp_tree load_node;
 	  FOR_EACH_VEC_ELT (loads, i, load_node)
 	    {
-	      gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
+	      stmt_vec_info stmt_vinfo = DR_GROUP_FIRST_ELEMENT
 		(SLP_TREE_SCALAR_STMTS (load_node)[0]);
-	      stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
-		  /* Use SLP for strided accesses (or if we
-		     can't load-lanes).  */
+	      /* Use SLP for strided accesses (or if we can't load-lanes).  */
 	      if (STMT_VINFO_STRIDED_P (stmt_vinfo)
 		  || ! vect_load_lanes_supported
 			(STMT_VINFO_VECTYPE (stmt_vinfo),
@@ -2230,11 +2220,11 @@ vect_analyze_slp (vec_info *vinfo, unsig
 					     max_tree_size))
 	      {
 		/* Dissolve reduction chain group.  */
-		gimple *next, *stmt = first_element;
+		gimple *stmt = first_element;
 		while (stmt)
 		  {
 		    stmt_vec_info vinfo = vinfo_for_stmt (stmt);
-		    next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
+		    stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
 		    REDUC_GROUP_FIRST_ELEMENT (vinfo) = NULL;
 		    REDUC_GROUP_NEXT_ELEMENT (vinfo) = NULL;
 		    stmt = next;
@@ -3698,7 +3688,7 @@ vect_transform_slp_perm_load (slp_tree n
   if (!STMT_VINFO_GROUPED_ACCESS (stmt_info))
     return false;
 
-  stmt_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info));
+  stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 
   mode = TYPE_MODE (vectype);
 
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:00.401042649 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:08.536970400 +0100
@@ -978,7 +978,7 @@ vect_model_store_cost (stmt_vec_info stm
 		       stmt_vector_for_cost *cost_vec)
 {
   unsigned int inside_cost = 0, prologue_cost = 0;
-  gimple *first_stmt = STMT_VINFO_STMT (stmt_info);
+  stmt_vec_info first_stmt_info = stmt_info;
   bool grouped_access_p = STMT_VINFO_GROUPED_ACCESS (stmt_info);
 
   /* ???  Somehow we need to fix this at the callers.  */
@@ -998,12 +998,12 @@ vect_model_store_cost (stmt_vec_info stm
   /* Grouped stores update all elements in the group at once,
      so we want the DR for the first statement.  */
   if (!slp_node && grouped_access_p)
-    first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
+    first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 
   /* True if we should include any once-per-group costs as well as
      the cost of the statement itself.  For SLP we only get called
      once per group anyhow.  */
-  bool first_stmt_p = (first_stmt == STMT_VINFO_STMT (stmt_info));
+  bool first_stmt_p = (first_stmt_info == stmt_info);
 
   /* We assume that the cost of a single store-lanes instruction is
      equivalent to the cost of DR_GROUP_SIZE separate stores.  If a grouped
@@ -1014,7 +1014,7 @@ vect_model_store_cost (stmt_vec_info stm
     {
       /* Uses a high and low interleave or shuffle operations for each
 	 needed permute.  */
-      int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
+      int group_size = DR_GROUP_SIZE (first_stmt_info);
       int nstmts = ncopies * ceil_log2 (group_size) * group_size;
       inside_cost = record_stmt_cost (cost_vec, nstmts, vec_perm,
 				      stmt_info, 0, vect_body);
@@ -1122,7 +1122,6 @@ vect_model_load_cost (stmt_vec_info stmt
 		      slp_tree slp_node,
 		      stmt_vector_for_cost *cost_vec)
 {
-  gimple *first_stmt = STMT_VINFO_STMT (stmt_info);
   unsigned int inside_cost = 0, prologue_cost = 0;
   bool grouped_access_p = STMT_VINFO_GROUPED_ACCESS (stmt_info);
 
@@ -1136,28 +1135,27 @@ vect_model_load_cost (stmt_vec_info stmt
     {
       /* If the load is permuted then the alignment is determined by
 	 the first group element not by the first scalar stmt DR.  */
-      gimple *stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
       /* Record the cost for the permutation.  */
       unsigned n_perms;
       unsigned assumed_nunits
-	= vect_nunits_for_cost (STMT_VINFO_VECTYPE (stmt_info));
+	= vect_nunits_for_cost (STMT_VINFO_VECTYPE (first_stmt_info));
       unsigned slp_vf = (ncopies * assumed_nunits) / instance->group_size; 
       vect_transform_slp_perm_load (slp_node, vNULL, NULL,
 				    slp_vf, instance, true,
 				    &n_perms);
       inside_cost += record_stmt_cost (cost_vec, n_perms, vec_perm,
-				       stmt_info, 0, vect_body);
+				       first_stmt_info, 0, vect_body);
       /* And adjust the number of loads performed.  This handles
 	 redundancies as well as loads that are later dead.  */
-      auto_sbitmap perm (DR_GROUP_SIZE (stmt_info));
+      auto_sbitmap perm (DR_GROUP_SIZE (first_stmt_info));
       bitmap_clear (perm);
       for (unsigned i = 0;
 	   i < SLP_TREE_LOAD_PERMUTATION (slp_node).length (); ++i)
 	bitmap_set_bit (perm, SLP_TREE_LOAD_PERMUTATION (slp_node)[i]);
       ncopies = 0;
       bool load_seen = false;
-      for (unsigned i = 0; i < DR_GROUP_SIZE (stmt_info); ++i)
+      for (unsigned i = 0; i < DR_GROUP_SIZE (first_stmt_info); ++i)
 	{
 	  if (i % assumed_nunits == 0)
 	    {
@@ -1171,19 +1169,21 @@ vect_model_load_cost (stmt_vec_info stmt
       if (load_seen)
 	ncopies++;
       gcc_assert (ncopies
-		  <= (DR_GROUP_SIZE (stmt_info) - DR_GROUP_GAP (stmt_info)
+		  <= (DR_GROUP_SIZE (first_stmt_info)
+		      - DR_GROUP_GAP (first_stmt_info)
 		      + assumed_nunits - 1) / assumed_nunits);
     }
 
   /* Grouped loads read all elements in the group at once,
      so we want the DR for the first statement.  */
+  stmt_vec_info first_stmt_info = stmt_info;
   if (!slp_node && grouped_access_p)
-    first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
+    first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 
   /* True if we should include any once-per-group costs as well as
      the cost of the statement itself.  For SLP we only get called
      once per group anyhow.  */
-  bool first_stmt_p = (first_stmt == STMT_VINFO_STMT (stmt_info));
+  bool first_stmt_p = (first_stmt_info == stmt_info);
 
   /* We assume that the cost of a single load-lanes instruction is
      equivalent to the cost of DR_GROUP_SIZE separate loads.  If a grouped
@@ -1194,7 +1194,7 @@ vect_model_load_cost (stmt_vec_info stmt
     {
       /* Uses an even and odd extract operations or shuffle operations
 	 for each needed permute.  */
-      int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
+      int group_size = DR_GROUP_SIZE (first_stmt_info);
       int nstmts = ncopies * ceil_log2 (group_size) * group_size;
       inside_cost += record_stmt_cost (cost_vec, nstmts, vec_perm,
 				       stmt_info, 0, vect_body);
@@ -2183,12 +2183,12 @@ get_group_load_store_type (gimple *stmt,
   vec_info *vinfo = stmt_info->vinfo;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
-  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-  data_reference *first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-  unsigned int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
-  bool single_element_p = (stmt == first_stmt
+  stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
+  data_reference *first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  unsigned int group_size = DR_GROUP_SIZE (first_stmt_info);
+  bool single_element_p = (stmt_info == first_stmt_info
 			   && !DR_GROUP_NEXT_ELEMENT (stmt_info));
-  unsigned HOST_WIDE_INT gap = DR_GROUP_GAP (vinfo_for_stmt (first_stmt));
+  unsigned HOST_WIDE_INT gap = DR_GROUP_GAP (first_stmt_info);
   poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
 
   /* True if the vectorized statements would access beyond the last
@@ -2315,14 +2315,14 @@ get_group_load_store_type (gimple *stmt,
 	*memory_access_type = VMAT_GATHER_SCATTER;
     }
 
-  if (vls_type != VLS_LOAD && first_stmt == stmt)
+  if (vls_type != VLS_LOAD && first_stmt_info == stmt_info)
     {
       /* STMT is the leader of the group. Check the operands of all the
 	 stmts of the group.  */
-      gimple *next_stmt = DR_GROUP_NEXT_ELEMENT (stmt_info);
-      while (next_stmt)
+      stmt_vec_info next_stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info);
+      while (next_stmt_info)
 	{
-	  tree op = vect_get_store_rhs (next_stmt);
+	  tree op = vect_get_store_rhs (next_stmt_info);
 	  enum vect_def_type dt;
 	  if (!vect_is_simple_use (op, vinfo, &dt))
 	    {
@@ -2331,7 +2331,7 @@ get_group_load_store_type (gimple *stmt,
 				 "use not simple.\n");
 	      return false;
 	    }
-	  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+	  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
 	}
     }
 
@@ -2482,7 +2482,7 @@ get_load_store_type (gimple *stmt, tree
      traditional behavior until that can be fixed.  */
   if (*memory_access_type == VMAT_ELEMENTWISE
       && !STMT_VINFO_STRIDED_P (stmt_info)
-      && !(stmt == DR_GROUP_FIRST_ELEMENT (stmt_info)
+      && !(stmt_info == DR_GROUP_FIRST_ELEMENT (stmt_info)
 	   && !DR_GROUP_NEXT_ELEMENT (stmt_info)
 	   && !pow2p_hwi (DR_GROUP_SIZE (stmt_info))))
     {
@@ -6195,13 +6195,13 @@ ensure_base_align (struct data_reference
 get_group_alias_ptr_type (gimple *first_stmt)
 {
   struct data_reference *first_dr, *next_dr;
-  gimple *next_stmt;
 
   first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
-  while (next_stmt)
+  stmt_vec_info next_stmt_info
+    = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
+  while (next_stmt_info)
     {
-      next_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (next_stmt));
+      next_dr = STMT_VINFO_DATA_REF (next_stmt_info);
       if (get_alias_set (DR_REF (first_dr))
 	  != get_alias_set (DR_REF (next_dr)))
 	{
@@ -6210,7 +6210,7 @@ get_group_alias_ptr_type (gimple *first_
 			     "conflicting alias set types.\n");
 	  return ptr_type_node;
 	}
-      next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+      next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
     }
   return reference_alias_ptr_type (DR_REF (first_dr));
 }
@@ -6248,7 +6248,7 @@ vectorizable_store (gimple *stmt, gimple
   gimple *ptr_incr = NULL;
   int ncopies;
   int j;
-  gimple *next_stmt, *first_stmt;
+  stmt_vec_info first_stmt_info;
   bool grouped_store;
   unsigned int group_size, i;
   vec<tree> oprnds = vNULL;
@@ -6400,13 +6400,13 @@ vectorizable_store (gimple *stmt, gimple
 		   && (slp || memory_access_type != VMAT_CONTIGUOUS));
   if (grouped_store)
     {
-      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
+      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
+      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+      group_size = DR_GROUP_SIZE (first_stmt_info);
     }
   else
     {
-      first_stmt = stmt;
+      first_stmt_info = stmt_info;
       first_dr = dr;
       group_size = vec_num = 1;
     }
@@ -6584,10 +6584,7 @@ vectorizable_store (gimple *stmt, gimple
     }
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
-    {
-      gimple *group_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      DR_GROUP_STORE_COUNT (vinfo_for_stmt (group_stmt))++;
-    }
+    DR_GROUP_STORE_COUNT (DR_GROUP_FIRST_ELEMENT (stmt_info))++;
 
   if (grouped_store)
     {
@@ -6596,8 +6593,8 @@ vectorizable_store (gimple *stmt, gimple
 
       /* We vectorize all the stmts of the interleaving group when we
 	 reach the last stmt in the group.  */
-      if (DR_GROUP_STORE_COUNT (vinfo_for_stmt (first_stmt))
-	  < DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
+      if (DR_GROUP_STORE_COUNT (first_stmt_info)
+	  < DR_GROUP_SIZE (first_stmt_info)
 	  && !slp)
 	{
 	  *vec_stmt = NULL;
@@ -6610,17 +6607,18 @@ vectorizable_store (gimple *stmt, gimple
           /* VEC_NUM is the number of vect stmts to be created for this 
              group.  */
           vec_num = SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node);
-          first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0]; 
-	  gcc_assert (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt)) == first_stmt);
-          first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-	  op = vect_get_store_rhs (first_stmt);
+	  first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
+	  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_stmt_info)
+		      == first_stmt_info);
+	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+	  op = vect_get_store_rhs (first_stmt_info);
         } 
       else
         /* VEC_NUM is the number of vect stmts to be created for this 
            group.  */
 	vec_num = group_size;
 
-      ref_type = get_group_alias_ptr_type (first_stmt);
+      ref_type = get_group_alias_ptr_type (first_stmt_info);
     }
   else
     ref_type = reference_alias_ptr_type (DR_REF (first_dr));
@@ -6759,7 +6757,7 @@ vectorizable_store (gimple *stmt, gimple
 
       prev_stmt_info = NULL;
       alias_off = build_int_cst (ref_type, 0);
-      next_stmt = first_stmt;
+      stmt_vec_info next_stmt_info = first_stmt_info;
       for (g = 0; g < group_size; g++)
 	{
 	  running_off = offvar;
@@ -6780,7 +6778,7 @@ vectorizable_store (gimple *stmt, gimple
 	  for (j = 0; j < ncopies; j++)
 	    {
 	      /* We've set op and dt above, from vect_get_store_rhs,
-		 and first_stmt == stmt.  */
+		 and first_stmt_info == stmt_info.  */
 	      if (j == 0)
 		{
 		  if (slp)
@@ -6791,8 +6789,9 @@ vectorizable_store (gimple *stmt, gimple
 		    }
 		  else
 		    {
-		      op = vect_get_store_rhs (next_stmt);
-		      vec_oprnd = vect_get_vec_def_for_operand (op, next_stmt);
+		      op = vect_get_store_rhs (next_stmt_info);
+		      vec_oprnd = vect_get_vec_def_for_operand
+			(op, next_stmt_info);
 		    }
 		}
 	      else
@@ -6866,7 +6865,7 @@ vectorizable_store (gimple *stmt, gimple
 		    }
 		}
 	    }
-	  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+	  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
 	  if (slp)
 	    break;
 	}
@@ -6985,19 +6984,20 @@ vectorizable_store (gimple *stmt, gimple
 
 		 If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN and
 		 OPRNDS are of size 1.  */
-	      next_stmt = first_stmt;
+	      stmt_vec_info next_stmt_info = first_stmt_info;
 	      for (i = 0; i < group_size; i++)
 		{
 		  /* Since gaps are not supported for interleaved stores,
 		     DR_GROUP_SIZE is the exact number of stmts in the chain.
-		     Therefore, NEXT_STMT can't be NULL_TREE.  In case that
-		     there is no interleaving, DR_GROUP_SIZE is 1, and only one
-		     iteration of the loop will be executed.  */
-		  op = vect_get_store_rhs (next_stmt);
-		  vec_oprnd = vect_get_vec_def_for_operand (op, next_stmt);
+		     Therefore, NEXT_STMT_INFO can't be NULL_TREE.  In case
+		     that there is no interleaving, DR_GROUP_SIZE is 1,
+		     and only one iteration of the loop will be executed.  */
+		  op = vect_get_store_rhs (next_stmt_info);
+		  vec_oprnd = vect_get_vec_def_for_operand
+		    (op, next_stmt_info);
 		  dr_chain.quick_push (vec_oprnd);
 		  oprnds.quick_push (vec_oprnd);
-		  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+		  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
 		}
 	      if (mask)
 		vec_mask = vect_get_vec_def_for_operand (mask, stmt,
@@ -7029,7 +7029,7 @@ vectorizable_store (gimple *stmt, gimple
 	    }
 	  else
 	    dataref_ptr
-	      = vect_create_data_ref_ptr (first_stmt, aggr_type,
+	      = vect_create_data_ref_ptr (first_stmt_info, aggr_type,
 					  simd_lane_access_p ? loop : NULL,
 					  offset, &dummy, gsi, &ptr_incr,
 					  simd_lane_access_p, &inv_p,
@@ -7132,7 +7132,7 @@ vectorizable_store (gimple *stmt, gimple
 					&result_chain);
 	    }
 
-	  next_stmt = first_stmt;
+	  stmt_vec_info next_stmt_info = first_stmt_info;
 	  for (i = 0; i < vec_num; i++)
 	    {
 	      unsigned align, misalign;
@@ -7249,8 +7249,8 @@ vectorizable_store (gimple *stmt, gimple
 	      if (slp)
 		continue;
 
-	      next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
-	      if (!next_stmt)
+	      next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
+	      if (!next_stmt_info)
 		break;
 	    }
 	}
@@ -7423,7 +7423,7 @@ vectorizable_load (gimple *stmt, gimple_
   gphi *phi = NULL;
   vec<tree> dr_chain = vNULL;
   bool grouped_load = false;
-  gimple *first_stmt;
+  stmt_vec_info first_stmt_info;
   stmt_vec_info first_stmt_info_for_drptr = NULL;
   bool inv_p;
   bool compute_in_loop = false;
@@ -7565,8 +7565,8 @@ vectorizable_load (gimple *stmt, gimple_
       gcc_assert (!nested_in_vect_loop);
       gcc_assert (!STMT_VINFO_GATHER_SCATTER_P (stmt_info));
 
-      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
+      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
+      group_size = DR_GROUP_SIZE (first_stmt_info);
 
       if (slp && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
 	slp_perm = true;
@@ -7696,25 +7696,26 @@ vectorizable_load (gimple *stmt, gimple_
 
       if (grouped_load)
 	{
-	  first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-	  first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+	  first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
+	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
 	}
       else
 	{
-	  first_stmt = stmt;
+	  first_stmt_info = stmt_info;
 	  first_dr = dr;
 	}
       if (slp && grouped_load)
 	{
-	  group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
-	  ref_type = get_group_alias_ptr_type (first_stmt);
+	  group_size = DR_GROUP_SIZE (first_stmt_info);
+	  ref_type = get_group_alias_ptr_type (first_stmt_info);
 	}
       else
 	{
 	  if (grouped_load)
 	    cst_offset
 	      = (tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (vectype)))
-		 * vect_get_place_in_interleaving_chain (stmt, first_stmt));
+		 * vect_get_place_in_interleaving_chain (stmt,
+							 first_stmt_info));
 	  group_size = 1;
 	  ref_type = reference_alias_ptr_type (DR_REF (dr));
 	}
@@ -7924,19 +7925,19 @@ vectorizable_load (gimple *stmt, gimple_
 
   if (grouped_load)
     {
-      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
+      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
+      group_size = DR_GROUP_SIZE (first_stmt_info);
       /* For SLP vectorization we directly vectorize a subchain
          without permutation.  */
       if (slp && ! SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
-	first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
+	first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
       /* For BB vectorization always use the first stmt to base
 	 the data ref pointer on.  */
       if (bb_vinfo)
 	first_stmt_info_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
 
       /* Check if the chain of loads is already vectorized.  */
-      if (STMT_VINFO_VEC_STMT (vinfo_for_stmt (first_stmt))
+      if (STMT_VINFO_VEC_STMT (first_stmt_info)
 	  /* For SLP we would need to copy over SLP_TREE_VEC_STMTS.
 	     ???  But we can only do so if there is exactly one
 	     as we have no way to get at the rest.  Leave the CSE
@@ -7950,7 +7951,7 @@ vectorizable_load (gimple *stmt, gimple_
 	  *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
 	  return true;
 	}
-      first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
       group_gap_adj = 0;
 
       /* VEC_NUM is the number of vect stmts to be created for this group.  */
@@ -7979,11 +7980,11 @@ vectorizable_load (gimple *stmt, gimple_
       else
 	vec_num = group_size;
 
-      ref_type = get_group_alias_ptr_type (first_stmt);
+      ref_type = get_group_alias_ptr_type (first_stmt_info);
     }
   else
     {
-      first_stmt = stmt;
+      first_stmt_info = stmt_info;
       first_dr = dr;
       group_size = vec_num = 1;
       group_gap_adj = 0;
@@ -8120,7 +8121,7 @@ vectorizable_load (gimple *stmt, gimple_
        || alignment_support_scheme == dr_explicit_realign)
       && !compute_in_loop)
     {
-      msq = vect_setup_realignment (first_stmt, gsi, &realignment_token,
+      msq = vect_setup_realignment (first_stmt_info, gsi, &realignment_token,
 				    alignment_support_scheme, NULL_TREE,
 				    &at_loop);
       if (alignment_support_scheme == dr_explicit_realign_optimized)
@@ -8184,7 +8185,7 @@ vectorizable_load (gimple *stmt, gimple_
 	      inv_p = false;
 	    }
 	  else if (first_stmt_info_for_drptr
-		   && first_stmt != first_stmt_info_for_drptr)
+		   && first_stmt_info != first_stmt_info_for_drptr)
 	    {
 	      dataref_ptr
 		= vect_create_data_ref_ptr (first_stmt_info_for_drptr,
@@ -8209,7 +8210,7 @@ vectorizable_load (gimple *stmt, gimple_
 	    }
 	  else
 	    dataref_ptr
-	      = vect_create_data_ref_ptr (first_stmt, aggr_type, at_loop,
+	      = vect_create_data_ref_ptr (first_stmt_info, aggr_type, at_loop,
 					  offset, &dummy, gsi, &ptr_incr,
 					  simd_lane_access_p, &inv_p,
 					  byte_offset, bump);
@@ -8388,7 +8389,7 @@ vectorizable_load (gimple *stmt, gimple_
 		    tree vs = size_int (TYPE_VECTOR_SUBPARTS (vectype));
 
 		    if (compute_in_loop)
-		      msq = vect_setup_realignment (first_stmt, gsi,
+		      msq = vect_setup_realignment (first_stmt_info, gsi,
 						    &realignment_token,
 						    dr_explicit_realign,
 						    dataref_ptr, NULL);
@@ -9708,8 +9709,7 @@ vect_transform_stmt (gimple *stmt, gimpl
 	     one are skipped, and there vec_stmt_info shouldn't be freed
 	     meanwhile.  */
 	  *grouped_store = true;
-	  stmt_vec_info group_info
-	    = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info));
+	  stmt_vec_info group_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 	  if (DR_GROUP_STORE_COUNT (group_info) == DR_GROUP_SIZE (group_info))
 	    is_store = true;
 	}
@@ -9817,14 +9817,13 @@ vect_transform_stmt (gimple *stmt, gimpl
 vect_remove_stores (gimple *first_stmt)
 {
   gimple *next = first_stmt;
-  gimple *tmp;
   gimple_stmt_iterator next_si;
 
   while (next)
     {
       stmt_vec_info stmt_info = vinfo_for_stmt (next);
 
-      tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
+      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
       if (is_pattern_stmt_p (stmt_info))
 	next = STMT_VINFO_RELATED_STMT (stmt_info);
       /* Free the attached stmt_vec_info and remove the stmt.  */

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (17 preceding siblings ...)
  2018-07-24 10:00 ` [18/46] Make SLP_TREE_SCALAR_STMTS " Richard Sandiford
@ 2018-07-24 10:01 ` Richard Sandiford
  2018-07-25  9:28   ` Richard Biener
  2018-07-24 10:01 ` [20/46] Make *FIRST_ELEMENT and *NEXT_ELEMENT stmt_vec_infos Richard Sandiford
                   ` (26 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:01 UTC (permalink / raw)
  To: gcc-patches

This patch changes the SLP lists grouped_stores and reduction_chains
from auto_vec<gimple *> to auto_vec<stmt_vec_info>.  It was easier
to do them together due to the way vect_analyze_slp is structured.
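
For illustration, a minimal standalone sketch of the shape of the
change (stand-in types and a hypothetical helper name, not the actual
GCC declarations).  Storing stmt_vec_infos in the list means consumers
can read the fields directly instead of starting with a vinfo_for_stmt
lookup:

  #include <vector>

  /* Stand-ins for the real types; illustration only.  */
  struct gimple {};
  struct _stmt_vec_info { gimple *stmt; bool in_pattern_p; };
  typedef _stmt_vec_info *stmt_vec_info;

  /* Before: auto_vec<gimple *>, so every walk over the list had to
     map each element back to its stmt_vec_info first.  */
  std::vector<stmt_vec_info> reduction_chains;

  bool
  any_chain_in_pattern_p ()
  {
    for (stmt_vec_info first : reduction_chains)
      if (first->in_pattern_p)  /* no vinfo_for_stmt needed */
        return true;
    return false;
  }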


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::grouped_stores): Change from
	an auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
	(_loop_vec_info::reduction_chains): Likewise.
	* tree-vect-loop.c (vect_fixup_scalar_cycles_with_patterns): Update
	accordingly.
	* tree-vect-slp.c (vect_analyze_slp): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:08.536970400 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:12.060939107 +0100
@@ -259,7 +259,7 @@ struct vec_info {
 
   /* All interleaving chains of stores, represented by the first
      stmt in the chain.  */
-  auto_vec<gimple *> grouped_stores;
+  auto_vec<stmt_vec_info> grouped_stores;
 
   /* Cost data used by the target cost model.  */
   void *target_cost_data;
@@ -479,7 +479,7 @@ typedef struct _loop_vec_info : public v
 
   /* All reduction chains in the loop, represented by the first
      stmt in the chain.  */
-  auto_vec<gimple *> reduction_chains;
+  auto_vec<stmt_vec_info> reduction_chains;
 
   /* Cost vector for a single scalar iteration.  */
   auto_vec<stmt_info_for_cost> scalar_cost_vec;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:08.532970436 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:12.060939107 +0100
@@ -677,13 +677,13 @@ vect_fixup_reduc_chain (gimple *stmt)
 static void
 vect_fixup_scalar_cycles_with_patterns (loop_vec_info loop_vinfo)
 {
-  gimple *first;
+  stmt_vec_info first;
   unsigned i;
 
   FOR_EACH_VEC_ELT (LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo), i, first)
-    if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (first)))
+    if (STMT_VINFO_IN_PATTERN_P (first))
       {
-	stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
+	stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
 	while (next)
 	  {
 	    if (! STMT_VINFO_IN_PATTERN_P (next))
@@ -696,7 +696,7 @@ vect_fixup_scalar_cycles_with_patterns (
 	  {
 	    vect_fixup_reduc_chain (first);
 	    LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo)[i]
-	      = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first));
+	      = STMT_VINFO_RELATED_STMT (first);
 	  }
       }
 }
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:08.536970400 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:12.060939107 +0100
@@ -2202,7 +2202,7 @@ vect_analyze_slp_instance (vec_info *vin
 vect_analyze_slp (vec_info *vinfo, unsigned max_tree_size)
 {
   unsigned int i;
-  gimple *first_element;
+  stmt_vec_info first_element;
 
   DUMP_VECT_SCOPE ("vect_analyze_slp");
 
@@ -2220,17 +2220,15 @@ vect_analyze_slp (vec_info *vinfo, unsig
 					     max_tree_size))
 	      {
 		/* Dissolve reduction chain group.  */
-		gimple *stmt = first_element;
-		while (stmt)
+		stmt_vec_info vinfo = first_element;
+		while (vinfo)
 		  {
-		    stmt_vec_info vinfo = vinfo_for_stmt (stmt);
 		    stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
 		    REDUC_GROUP_FIRST_ELEMENT (vinfo) = NULL;
 		    REDUC_GROUP_NEXT_ELEMENT (vinfo) = NULL;
-		    stmt = next;
+		    vinfo = next;
 		  }
-		STMT_VINFO_DEF_TYPE (vinfo_for_stmt (first_element))
-		  = vect_internal_def;
+		STMT_VINFO_DEF_TYPE (first_element) = vect_internal_def;
 	      }
 	}
 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [22/46] Make DR_GROUP_SAME_DR_STMT a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (21 preceding siblings ...)
  2018-07-24 10:02 ` [24/46] Make stmt_info_for_cost use " Richard Sandiford
@ 2018-07-24 10:02 ` Richard Sandiford
  2018-07-25  9:29   ` Richard Biener
  2018-07-24 10:02 ` [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info Richard Sandiford
                   ` (22 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:02 UTC (permalink / raw)
  To: gcc-patches

This patch changes DR_GROUP_SAME_DR_STMT from a gimple stmt to a
stmt_vec_info.
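
As a minimal sketch (simplified fields and a hypothetical helper name,
not the real definitions), the effect is that users of the field no
longer need a vinfo_for_stmt round trip:

  struct gimple {};
  struct _stmt_vec_info
  {
    gimple *stmt;
    int slp_type;
    /* Was: gimple *same_dr_stmt, which forced a vinfo_for_stmt
       lookup before any of its fields could be read.  */
    _stmt_vec_info *same_dr_stmt;
  };
  typedef _stmt_vec_info *stmt_vec_info;

  /* The check in vectorizable_load becomes a direct comparison.  */
  bool
  same_dr_slp_type_differs_p (stmt_vec_info stmt_info)
  {
    stmt_vec_info same = stmt_info->same_dr_stmt;
    return same && stmt_info->slp_type != same->slp_type;
  }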


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::same_dr_stmt): Change from
	a gimple stmt to a stmt_vec_info.
	* tree-vect-stmts.c (vectorizable_load): Update accordingly.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:12.060939107 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:15.756906285 +0100
@@ -876,7 +876,7 @@ struct _stmt_vec_info {
   stmt_vec_info next_element;
   /* For data-refs, in case that two or more stmts share data-ref, this is the
      pointer to the previously detected stmt with the same dr.  */
-  gimple *same_dr_stmt;
+  stmt_vec_info same_dr_stmt;
   /* The size of the group.  */
   unsigned int size;
   /* For stores, number of stores from this group seen. We vectorize the last
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:08.536970400 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:15.756906285 +0100
@@ -7590,8 +7590,7 @@ vectorizable_load (gimple *stmt, gimple_
 	 we have to give up.  */
       if (DR_GROUP_SAME_DR_STMT (stmt_info)
 	  && (STMT_SLP_TYPE (stmt_info)
-	      != STMT_SLP_TYPE (vinfo_for_stmt
-				 (DR_GROUP_SAME_DR_STMT (stmt_info)))))
+	      != STMT_SLP_TYPE (DR_GROUP_SAME_DR_STMT (stmt_info))))
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [24/46] Make stmt_info_for_cost use a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (20 preceding siblings ...)
  2018-07-24 10:01 ` [19/46] Make vect_dr_stmt return a stmt_vec_info Richard Sandiford
@ 2018-07-24 10:02 ` Richard Sandiford
  2018-07-25  9:30   ` Richard Biener
  2018-07-24 10:02 ` [22/46] Make DR_GROUP_SAME_DR_STMT " Richard Sandiford
                   ` (23 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:02 UTC (permalink / raw)
  To: gcc-patches

This patch makes stmt_info_for_cost carry a stmt_vec_info instead
of a gimple stmt.  The structure is internal to the vectoriser,
so targets aren't affected.
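
A simplified sketch of why this helps (stand-in types and signatures;
the real add_stmt_cost takes more arguments): the cost entry already
carries its stmt_vec_info, so consumers drop the null-guarded
vinfo_for_stmt dance:

  struct _stmt_vec_info {};
  typedef _stmt_vec_info *stmt_vec_info;

  struct stmt_info_for_cost
  {
    int count;
    stmt_vec_info stmt_info;  /* was: gimple *stmt */
    int misalign;
  };

  static void
  add_stmt_cost (int /*count*/, stmt_vec_info /*info*/, int /*misalign*/)
  {
    /* Target cost hook would be invoked here.  */
  }

  void
  add_one_cost (const stmt_info_for_cost &cost)
  {
    /* Before: add_stmt_cost (cost.count,
                              cost.stmt ? vinfo_for_stmt (cost.stmt)
                                        : NULL_STMT_VEC_INFO,
                              cost.misalign);  */
    add_stmt_cost (cost.count, cost.stmt_info, cost.misalign);
  }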


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (stmt_info_for_cost::stmt): Replace with...
	(stmt_info_for_cost::stmt_info): ...this new field.
	(add_stmt_costs): Update accordingly.
	* tree-vect-loop.c (vect_compute_single_scalar_iteration_cost)
	(vect_get_known_peeling_cost): Likewise.
	(vect_estimate_min_profitable_iters): Likewise.
	* tree-vect-stmts.c (record_stmt_cost): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:18.856878757 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:22.264848493 +0100
@@ -116,7 +116,7 @@ struct stmt_info_for_cost {
   int count;
   enum vect_cost_for_stmt kind;
   enum vect_cost_model_location where;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   int misalign;
 };
 
@@ -1282,10 +1282,7 @@ add_stmt_costs (void *data, stmt_vector_
   stmt_info_for_cost *cost;
   unsigned i;
   FOR_EACH_VEC_ELT (*cost_vec, i, cost)
-    add_stmt_cost (data, cost->count, cost->kind,
-		   (cost->stmt
-		    ? vinfo_for_stmt (cost->stmt)
-		    : NULL_STMT_VEC_INFO),
+    add_stmt_cost (data, cost->count, cost->kind, cost->stmt_info,
 		   cost->misalign, cost->where);
 }
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:12.060939107 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:22.260848529 +0100
@@ -1136,13 +1136,9 @@ vect_compute_single_scalar_iteration_cos
   int j;
   FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo),
 		    j, si)
-    {
-      struct _stmt_vec_info *stmt_info
-	= si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-      (void) add_stmt_cost (target_cost_data, si->count,
-			    si->kind, stmt_info, si->misalign,
-			    vect_body);
-    }
+    (void) add_stmt_cost (target_cost_data, si->count,
+			  si->kind, si->stmt_info, si->misalign,
+			  vect_body);
   unsigned dummy, body_cost = 0;
   finish_cost (target_cost_data, &dummy, &body_cost, &dummy);
   destroy_cost_data (target_cost_data);
@@ -3344,24 +3340,16 @@ vect_get_known_peeling_cost (loop_vec_in
   int j;
   if (peel_iters_prologue)
     FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
-	{
-	  stmt_vec_info stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-	  retval += record_stmt_cost (prologue_cost_vec,
-				      si->count * peel_iters_prologue,
-				      si->kind, stmt_info, si->misalign,
-				      vect_prologue);
-	}
+      retval += record_stmt_cost (prologue_cost_vec,
+				  si->count * peel_iters_prologue,
+				  si->kind, si->stmt_info, si->misalign,
+				  vect_prologue);
   if (*peel_iters_epilogue)
     FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
-	{
-	  stmt_vec_info stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-	  retval += record_stmt_cost (epilogue_cost_vec,
-				      si->count * *peel_iters_epilogue,
-				      si->kind, stmt_info, si->misalign,
-				      vect_epilogue);
-	}
+      retval += record_stmt_cost (epilogue_cost_vec,
+				  si->count * *peel_iters_epilogue,
+				  si->kind, si->stmt_info, si->misalign,
+				  vect_epilogue);
 
   return retval;
 }
@@ -3497,13 +3485,9 @@ vect_estimate_min_profitable_iters (loop
 	  int j;
 	  FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo),
 			    j, si)
-	    {
-	      struct _stmt_vec_info *stmt_info
-		= si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-	      (void) add_stmt_cost (target_cost_data, si->count,
-				    si->kind, stmt_info, si->misalign,
-				    vect_epilogue);
-	    }
+	    (void) add_stmt_cost (target_cost_data, si->count,
+				  si->kind, si->stmt_info, si->misalign,
+				  vect_epilogue);
 	}
     }
   else if (npeel < 0)
@@ -3535,15 +3519,13 @@ vect_estimate_min_profitable_iters (loop
       int j;
       FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo), j, si)
 	{
-	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
 	  (void) add_stmt_cost (target_cost_data,
 				si->count * peel_iters_prologue,
-				si->kind, stmt_info, si->misalign,
+				si->kind, si->stmt_info, si->misalign,
 				vect_prologue);
 	  (void) add_stmt_cost (target_cost_data,
 				si->count * peel_iters_epilogue,
-				si->kind, stmt_info, si->misalign,
+				si->kind, si->stmt_info, si->misalign,
 				vect_epilogue);
 	}
     }
@@ -3566,20 +3548,12 @@ vect_estimate_min_profitable_iters (loop
 					  &epilogue_cost_vec);
 
       FOR_EACH_VEC_ELT (prologue_cost_vec, j, si)
-	{
-	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-	  (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
-				si->misalign, vect_prologue);
-	}
+	(void) add_stmt_cost (data, si->count, si->kind, si->stmt_info,
+			      si->misalign, vect_prologue);
 
       FOR_EACH_VEC_ELT (epilogue_cost_vec, j, si)
-	{
-	  struct _stmt_vec_info *stmt_info
-	    = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
-	  (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
-				si->misalign, vect_epilogue);
-	}
+	(void) add_stmt_cost (data, si->count, si->kind, si->stmt_info,
+			      si->misalign, vect_epilogue);
 
       prologue_cost_vec.release ();
       epilogue_cost_vec.release ();
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:15.756906285 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:22.260848529 +0100
@@ -98,9 +98,7 @@ record_stmt_cost (stmt_vector_for_cost *
       && STMT_VINFO_GATHER_SCATTER_P (stmt_info))
     kind = vector_scatter_store;
 
-  stmt_info_for_cost si = { count, kind, where,
-      stmt_info ? STMT_VINFO_STMT (stmt_info) : NULL,
-      misalign };
+  stmt_info_for_cost si = { count, kind, where, stmt_info, misalign };
   body_cost_vec->safe_push (si);
 
   tree vectype = stmt_info ? stmt_vectype (stmt_info) : NULL_TREE;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (22 preceding siblings ...)
  2018-07-24 10:02 ` [22/46] Make DR_GROUP_SAME_DR_STMT " Richard Sandiford
@ 2018-07-24 10:02 ` Richard Sandiford
  2018-07-25  9:29   ` Richard Biener
  2018-07-24 10:03 ` [26/46] Make more use of dyn_cast in tree-vect* Richard Sandiford
                   ` (21 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:02 UTC (permalink / raw)
  To: gcc-patches

This patch changes LOOP_VINFO_MAY_MISALIGN_STMTS from an
auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
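
The consumer-side effect mirrors the other container changes; a
minimal sketch (stand-in types and a hypothetical function name) of
how the loop over the list simplifies:

  #include <vector>

  struct data_reference { int misalignment; };
  struct _stmt_vec_info { data_reference *dr; };
  typedef _stmt_vec_info *stmt_vec_info;

  void
  assume_aligned_in_versioned_loop
    (std::vector<stmt_vec_info> &may_misalign_stmts)
  {
    /* Before, each element was a gimple * and the loop body began
       with stmt_vec_info stmt_info = vinfo_for_stmt (stmt).  */
    for (stmt_vec_info stmt_info : may_misalign_stmts)
      stmt_info->dr->misalignment = 0;  /* cf. SET_DR_MISALIGNMENT */
  }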


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_loop_vec_info::may_misalign_stmts): Change
	from an auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
	* tree-vect-data-refs.c (vect_enhance_data_refs_alignment): Update
	accordingly.
	* tree-vect-loop-manip.c (vect_create_cond_for_align_checks): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:15.756906285 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:18.856878757 +0100
@@ -472,7 +472,7 @@ typedef struct _loop_vec_info : public v
 
   /* Statements in the loop that have data references that are candidates for a
      runtime (loop versioning) misalignment check.  */
-  auto_vec<gimple *> may_misalign_stmts;
+  auto_vec<stmt_vec_info> may_misalign_stmts;
 
   /* Reduction cycles detected in the loop. Used in loop-aware SLP.  */
   auto_vec<stmt_vec_info> reductions;
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:08.532970436 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:18.856878757 +0100
@@ -2231,16 +2231,15 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_versioning)
     {
-      vec<gimple *> may_misalign_stmts
+      vec<stmt_vec_info> may_misalign_stmts
         = LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo);
-      gimple *stmt;
+      stmt_vec_info stmt_info;
 
       /* It can now be assumed that the data references in the statements
          in LOOP_VINFO_MAY_MISALIGN_STMTS will be aligned in the version
          of the loop being vectorized.  */
-      FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt)
+      FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
         {
-          stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
           dr = STMT_VINFO_DATA_REF (stmt_info);
 	  SET_DR_MISALIGNMENT (dr, 0);
 	  if (dump_enabled_p ())
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:23:04.029010432 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:23:18.856878757 +0100
@@ -2772,9 +2772,9 @@ vect_create_cond_for_align_checks (loop_
                                    tree *cond_expr,
 				   gimple_seq *cond_expr_stmt_list)
 {
-  vec<gimple *> may_misalign_stmts
+  vec<stmt_vec_info> may_misalign_stmts
     = LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo);
-  gimple *ref_stmt;
+  stmt_vec_info stmt_info;
   int mask = LOOP_VINFO_PTR_MASK (loop_vinfo);
   tree mask_cst;
   unsigned int i;
@@ -2795,23 +2795,22 @@ vect_create_cond_for_align_checks (loop_
   /* Create expression (mask & (dr_1 || ... || dr_n)) where dr_i is the address
      of the first vector of the i'th data reference. */
 
-  FOR_EACH_VEC_ELT (may_misalign_stmts, i, ref_stmt)
+  FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
     {
       gimple_seq new_stmt_list = NULL;
       tree addr_base;
       tree addr_tmp_name;
       tree new_or_tmp_name;
       gimple *addr_stmt, *or_stmt;
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (ref_stmt);
-      tree vectype = STMT_VINFO_VECTYPE (stmt_vinfo);
+      tree vectype = STMT_VINFO_VECTYPE (stmt_info);
       bool negative = tree_int_cst_compare
-	(DR_STEP (STMT_VINFO_DATA_REF (stmt_vinfo)), size_zero_node) < 0;
+	(DR_STEP (STMT_VINFO_DATA_REF (stmt_info)), size_zero_node) < 0;
       tree offset = negative
 	? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1) : size_zero_node;
 
       /* create: addr_tmp = (int)(address_of_first_vector) */
       addr_base =
-	vect_create_addr_base_for_vector_ref (ref_stmt, &new_stmt_list,
+	vect_create_addr_base_for_vector_ref (stmt_info, &new_stmt_list,
 					      offset);
       if (new_stmt_list != NULL)
 	gimple_seq_add_seq (cond_expr_stmt_list, new_stmt_list);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [27/46] Remove duplicated stmt_vec_info lookups
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (25 preceding siblings ...)
  2018-07-24 10:03 ` [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos Richard Sandiford
@ 2018-07-24 10:03 ` Richard Sandiford
  2018-07-25  9:32   ` Richard Biener
  2018-07-24 10:04 ` [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2) Richard Sandiford
                   ` (18 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:03 UTC (permalink / raw)
  To: gcc-patches

Various places called vect_dr_stmt or vinfo_for_stmt multiple times
on the same input.  This patch makes them reuse the earlier result.
It also splits a couple of single vinfo_for_stmt calls out into
separate statements so that they can be reused in later patches.
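
Schematically (stand-in types and a stand-in lookup function; the real
vinfo_for_stmt indexes a uid-keyed array, but the point is that the
lookup is not free), the pattern is just hoisting the result into a
local:

  #include <unordered_map>

  struct gimple {};
  struct _stmt_vec_info
  {
    bool grouped_access;
    _stmt_vec_info *first_element;
  };
  typedef _stmt_vec_info *stmt_vec_info;

  static std::unordered_map<const gimple *, stmt_vec_info> stmt_map;

  static stmt_vec_info
  lookup_stmt (const gimple *stmt)
  {
    auto it = stmt_map.find (stmt);
    return it == stmt_map.end () ? nullptr : it->second;
  }

  bool
  grouped_first_p (const gimple *stmt)
  {
    /* Before: lookup_stmt (stmt)->grouped_access
       && lookup_stmt (stmt)->first_element, i.e. two lookups.  */
    stmt_vec_info stmt_info = lookup_stmt (stmt);
    return (stmt_info && stmt_info->grouped_access
            && stmt_info->first_element);
  }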


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_analyze_data_ref_dependence)
	(vect_slp_analyze_node_dependences, vect_analyze_data_ref_accesses)
	(vect_permute_store_chain, vect_permute_load_chain)
	(vect_shift_permute_load_chain, vect_transform_grouped_load): Avoid
	repeated stmt_vec_info lookups.
	* tree-vect-loop-manip.c (vect_can_advance_ivs_p): Likewise.
	(vect_update_ivs_after_vectorizer): Likewise.
	* tree-vect-loop.c (vect_is_simple_reduction): Likewise.
	(vect_create_epilog_for_reduction, vectorizable_reduction): Likewise.
	* tree-vect-patterns.c (adjust_bool_stmts): Likewise.
	* tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
	(vect_bb_slp_scalar_cost): Likewise.
	* tree-vect-stmts.c (get_group_alias_ptr_type): Likewise.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:28.452793542 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:31.736764378 +0100
@@ -472,8 +472,7 @@ vect_analyze_data_ref_dependence (struct
 		... = a[i];
 		a[i+1] = ...;
 	     where loads from the group interleave with the store.  */
-	  if (!vect_preserves_scalar_order_p (vect_dr_stmt(dra),
-					      vect_dr_stmt (drb)))
+	  if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
 	    {
 	      if (dump_enabled_p ())
 		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -673,6 +672,7 @@ vect_slp_analyze_node_dependences (slp_i
      in NODE verifying we can sink them up to the last stmt in the
      group.  */
   stmt_vec_info last_access_info = vect_find_last_scalar_stmt_in_slp (node);
+  vec_info *vinfo = last_access_info->vinfo;
   for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
     {
       stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
@@ -691,7 +691,8 @@ vect_slp_analyze_node_dependences (slp_i
 
 	  /* If we couldn't record a (single) data reference for this
 	     stmt we have to resort to the alias oracle.  */
-	  data_reference *dr_b = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
+	  stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
+	  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info);
 	  if (!dr_b)
 	    {
 	      /* We are moving a store or sinking a load - this means
@@ -2951,7 +2952,7 @@ vect_analyze_data_ref_accesses (vec_info
 	      || data_ref_compare_tree (DR_BASE_ADDRESS (dra),
 					DR_BASE_ADDRESS (drb)) != 0
 	      || data_ref_compare_tree (DR_OFFSET (dra), DR_OFFSET (drb)) != 0
-	      || !can_group_stmts_p (vect_dr_stmt (dra), vect_dr_stmt (drb)))
+	      || !can_group_stmts_p (stmtinfo_a, stmtinfo_b))
 	    break;
 
 	  /* Check that the data-refs have the same constant size.  */
@@ -3040,11 +3041,11 @@ vect_analyze_data_ref_accesses (vec_info
 	  /* Link the found element into the group list.  */
 	  if (!DR_GROUP_FIRST_ELEMENT (stmtinfo_a))
 	    {
-	      DR_GROUP_FIRST_ELEMENT (stmtinfo_a) = vect_dr_stmt (dra);
+	      DR_GROUP_FIRST_ELEMENT (stmtinfo_a) = stmtinfo_a;
 	      lastinfo = stmtinfo_a;
 	    }
-	  DR_GROUP_FIRST_ELEMENT (stmtinfo_b) = vect_dr_stmt (dra);
-	  DR_GROUP_NEXT_ELEMENT (lastinfo) = vect_dr_stmt (drb);
+	  DR_GROUP_FIRST_ELEMENT (stmtinfo_b) = stmtinfo_a;
+	  DR_GROUP_NEXT_ELEMENT (lastinfo) = stmtinfo_b;
 	  lastinfo = stmtinfo_b;
 	}
     }
@@ -5219,9 +5220,10 @@ vect_permute_store_chain (vec<tree> dr_c
 			  gimple_stmt_iterator *gsi,
 			  vec<tree> *result_chain)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vect1, vect2, high, low;
   gimple *perm_stmt;
-  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
+  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   tree perm_mask_low, perm_mask_high;
   tree data_ref;
   tree perm3_mask_low, perm3_mask_high;
@@ -5840,11 +5842,12 @@ vect_permute_load_chain (vec<tree> dr_ch
 			 gimple_stmt_iterator *gsi,
 			 vec<tree> *result_chain)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree data_ref, first_vect, second_vect;
   tree perm_mask_even, perm_mask_odd;
   tree perm3_mask_low, perm3_mask_high;
   gimple *perm_stmt;
-  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
+  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   unsigned int i, j, log_length = exact_log2 (length);
 
   result_chain->quick_grow (length);
@@ -6043,14 +6046,14 @@ vect_shift_permute_load_chain (vec<tree>
 			       gimple_stmt_iterator *gsi,
 			       vec<tree> *result_chain)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vect[3], vect_shift[3], data_ref, first_vect, second_vect;
   tree perm2_mask1, perm2_mask2, perm3_mask;
   tree select_mask, shift1_mask, shift2_mask, shift3_mask, shift4_mask;
   gimple *perm_stmt;
 
-  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
+  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   unsigned int i;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
 
   unsigned HOST_WIDE_INT nelt, vf;
@@ -6310,6 +6313,7 @@ vect_shift_permute_load_chain (vec<tree>
 vect_transform_grouped_load (gimple *stmt, vec<tree> dr_chain, int size,
 			     gimple_stmt_iterator *gsi)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   machine_mode mode;
   vec<tree> result_chain = vNULL;
 
@@ -6321,7 +6325,7 @@ vect_transform_grouped_load (gimple *stm
   /* If reassociation width for vector type is 2 or greater target machine can
      execute 2 or more vector instructions in parallel.  Otherwise try to
      get chain for loads group using vect_shift_permute_load_chain.  */
-  mode = TYPE_MODE (STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt)));
+  mode = TYPE_MODE (STMT_VINFO_VECTYPE (stmt_info));
   if (targetm.sched.reassociation_width (VEC_PERM_EXPR, mode) > 1
       || pow2p_hwi (size)
       || !vect_shift_permute_load_chain (dr_chain, size, stmt,
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:23:18.856878757 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:23:31.736764378 +0100
@@ -1377,6 +1377,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
       tree evolution_part;
 
       gphi *phi = gsi.phi ();
+      stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
       if (dump_enabled_p ())
 	{
           dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
@@ -1397,8 +1398,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
 
       /* Analyze the evolution function.  */
 
-      evolution_part
-	= STMT_VINFO_LOOP_PHI_EVOLUTION_PART (vinfo_for_stmt (phi));
+      evolution_part = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (phi_info);
       if (evolution_part == NULL_TREE)
         {
 	  if (dump_enabled_p ())
@@ -1500,6 +1500,7 @@ vect_update_ivs_after_vectorizer (loop_v
 
       gphi *phi = gsi.phi ();
       gphi *phi1 = gsi1.phi ();
+      stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
       if (dump_enabled_p ())
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
@@ -1517,7 +1518,7 @@ vect_update_ivs_after_vectorizer (loop_v
 	}
 
       type = TREE_TYPE (gimple_phi_result (phi));
-      step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (vinfo_for_stmt (phi));
+      step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (phi_info);
       step_expr = unshare_expr (step_expr);
 
       /* FORNOW: We do not support IVs whose evolution function is a polynomial
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:28.456793506 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:31.740764343 +0100
@@ -3252,7 +3252,7 @@ vect_is_simple_reduction (loop_vec_info
     }
 
   /* Dissolve group eventually half-built by vect_is_slp_reduction.  */
-  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
+  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (def_stmt_info);
   while (first)
     {
       stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
@@ -4784,7 +4784,7 @@ vect_create_epilog_for_reduction (vec<tr
      # b1 = phi <b2, b0>
      a2 = operation (a1)
      b2 = operation (b1)  */
-  slp_reduc = (slp_node && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)));
+  slp_reduc = (slp_node && !REDUC_GROUP_FIRST_ELEMENT (stmt_info));
 
   /* True if we should implement SLP_REDUC using native reduction operations
      instead of scalar operations.  */
@@ -4799,7 +4799,7 @@ vect_create_epilog_for_reduction (vec<tr
 
      we may end up with more than one vector result.  Here we reduce them to
      one vector.  */
-  if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) || direct_slp_reduc)
+  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info) || direct_slp_reduc)
     {
       tree first_vect = PHI_RESULT (new_phis[0]);
       gassign *new_vec_stmt = NULL;
@@ -5544,7 +5544,7 @@ vect_create_epilog_for_reduction (vec<tr
      necessary, hence we set here REDUC_GROUP_SIZE to 1.  SCALAR_DEST is the
      LHS of the last stmt in the reduction chain, since we are looking for
      the loop exit phi node.  */
-  if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       stmt_vec_info dest_stmt_info
 	= SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
@@ -6095,8 +6095,8 @@ vectorizable_reduction (gimple *stmt, gi
   tree cond_reduc_val = NULL_TREE;
 
   /* Make sure it was already recognized as a reduction computation.  */
-  if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != vect_reduction_def
-      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != vect_nested_cycle)
+  if (STMT_VINFO_DEF_TYPE (stmt_info) != vect_reduction_def
+      && STMT_VINFO_DEF_TYPE (stmt_info) != vect_nested_cycle)
     return false;
 
   if (nested_in_vect_loop_p (loop, stmt))
@@ -6789,7 +6789,7 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (reduction_type == FOLD_LEFT_REDUCTION
       && slp_node
-      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       /* We cannot use in-order reductions in this case because there is
 	 an implicit reassociation of the operations involved.  */
@@ -6818,7 +6818,7 @@ vectorizable_reduction (gimple *stmt, gi
 
   /* Check extra constraints for variable-length unchained SLP reductions.  */
   if (STMT_SLP_TYPE (stmt_info)
-      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
+      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info)
       && !nunits_out.is_constant ())
     {
       /* We checked above that we could build the initial vector when
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:23:08.536970400 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:23:31.740764343 +0100
@@ -3505,6 +3505,8 @@ sort_after_uid (const void *p1, const vo
 adjust_bool_stmts (hash_set <gimple *> &bool_stmt_set,
 		   tree out_type, gimple *stmt)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+
   /* Gather original stmts in the bool pattern in their order of appearance
      in the IL.  */
   auto_vec<gimple *> bool_stmts (bool_stmt_set.elements ());
@@ -3517,11 +3519,11 @@ adjust_bool_stmts (hash_set <gimple *> &
   hash_map <tree, tree> defs;
   for (unsigned i = 0; i < bool_stmts.length (); ++i)
     adjust_bool_pattern (gimple_assign_lhs (bool_stmts[i]),
-			 out_type, vinfo_for_stmt (stmt), defs);
+			 out_type, stmt_info, defs);
 
   /* Pop the last pattern seq stmt and install it as pattern root for STMT.  */
   gimple *pattern_stmt
-    = gimple_seq_last_stmt (STMT_VINFO_PATTERN_DEF_SEQ (vinfo_for_stmt (stmt)));
+    = gimple_seq_last_stmt (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info));
   return gimple_assign_lhs (pattern_stmt);
 }
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:25.232822136 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:31.740764343 +0100
@@ -2157,8 +2157,8 @@ vect_analyze_slp_instance (vec_info *vin
      vector size.  */
   unsigned HOST_WIDE_INT const_nunits;
   if (is_a <bb_vec_info> (vinfo)
-      && STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
-      && DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
+      && STMT_VINFO_GROUPED_ACCESS (stmt_info)
+      && DR_GROUP_FIRST_ELEMENT (stmt_info)
       && nunits.is_constant (&const_nunits))
     {
       /* We consider breaking the group only on VF boundaries from the existing
@@ -2693,6 +2693,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
   FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
       gimple *stmt = stmt_info->stmt;
+      vec_info *vinfo = stmt_info->vinfo;
       ssa_op_iter op_iter;
       def_operand_p def_p;
 
@@ -2709,12 +2710,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
 	  imm_use_iterator use_iter;
 	  gimple *use_stmt;
 	  FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
-	    if (!is_gimple_debug (use_stmt)
-		&& (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
-		    || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
+	    if (!is_gimple_debug (use_stmt))
 	      {
-		(*life)[i] = true;
-		BREAK_FROM_IMM_USE_STMT (use_iter);
+		stmt_vec_info use_stmt_info = vinfo->lookup_stmt (use_stmt);
+		if (!use_stmt_info || !PURE_SLP_STMT (use_stmt_info))
+		  {
+		    (*life)[i] = true;
+		    BREAK_FROM_IMM_USE_STMT (use_iter);
+		  }
 	      }
 	}
       if ((*life)[i])
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:28.456793506 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:31.744764307 +0100
@@ -6193,11 +6193,11 @@ ensure_base_align (struct data_reference
 static tree
 get_group_alias_ptr_type (gimple *first_stmt)
 {
+  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
   struct data_reference *first_dr, *next_dr;
 
-  first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-  stmt_vec_info next_stmt_info
-    = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
+  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  stmt_vec_info next_stmt_info = DR_GROUP_NEXT_ELEMENT (first_stmt_info);
   while (next_stmt_info)
     {
       next_dr = STMT_VINFO_DATA_REF (next_stmt_info);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (24 preceding siblings ...)
  2018-07-24 10:03 ` [26/46] Make more use of dyn_cast in tree-vect* Richard Sandiford
@ 2018-07-24 10:03 ` Richard Sandiford
  2018-07-25  9:31   ` Richard Biener
  2018-07-24 10:03 ` [27/46] Remove duplicated stmt_vec_info lookups Richard Sandiford
                   ` (19 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:03 UTC (permalink / raw)
  To: gcc-patches

...and also make vect_find_last_scalar_stmt_in_slp return a stmt_vec_info.
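
Since the reworked get_earlier/later_stmt no longer accept null
arguments, callers that accumulate over a sequence keep the null check
themselves.  A minimal sketch of the idiom (stand-in types and a
hypothetical caller; the real comparison uses gimple_uid):

  struct gimple { unsigned uid; };
  struct _stmt_vec_info { gimple *stmt; };
  typedef _stmt_vec_info *stmt_vec_info;

  static stmt_vec_info
  get_later_stmt (stmt_vec_info a, stmt_vec_info b)
  {
    return a->stmt->uid > b->stmt->uid ? a : b;
  }

  stmt_vec_info
  find_last (stmt_vec_info *stmts, unsigned n)
  {
    stmt_vec_info last = nullptr;
    for (unsigned i = 0; i < n; ++i)
      last = last ? get_later_stmt (stmts[i], last) : stmts[i];
    return last;
  }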


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (get_earlier_stmt, get_later_stmt): Take and
	return stmt_vec_infos rather than gimple stmts.  Do not accept
	null arguments.
	(vect_find_last_scalar_stmt_in_slp): Return a stmt_vec_info instead
	of a gimple stmt.
	* tree-vect-slp.c (vect_find_last_scalar_stmt_in_slp): Likewise.
	Update use of get_later_stmt.
	(vect_get_constant_vectors): Update call accordingly.
	(vect_schedule_slp_instance): Likewise.
	* tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Likewise.
	(vect_slp_analyze_instance_dependence): Likewise.
	(vect_preserves_scalar_order_p): Update use of get_earlier_stmt.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:22.264848493 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:25.232822136 +0100
@@ -1119,68 +1119,36 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
     }
 }
 
-/* Return the earlier statement between STMT1 and STMT2.  */
+/* Return the earlier statement between STMT1_INFO and STMT2_INFO.  */
 
-static inline gimple *
-get_earlier_stmt (gimple *stmt1, gimple *stmt2)
+static inline stmt_vec_info
+get_earlier_stmt (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
 {
-  unsigned int uid1, uid2;
+  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (stmt1_info)
+			|| !STMT_VINFO_RELATED_STMT (stmt1_info))
+		       && (STMT_VINFO_IN_PATTERN_P (stmt2_info)
+			   || !STMT_VINFO_RELATED_STMT (stmt2_info)));
 
-  if (stmt1 == NULL)
-    return stmt2;
-
-  if (stmt2 == NULL)
-    return stmt1;
-
-  uid1 = gimple_uid (stmt1);
-  uid2 = gimple_uid (stmt2);
-
-  if (uid1 == 0 || uid2 == 0)
-    return NULL;
-
-  gcc_assert (uid1 <= stmt_vec_info_vec->length ()
-	      && uid2 <= stmt_vec_info_vec->length ());
-  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt1))
-			|| !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt1)))
-		       && (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt2))
-			   || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt2))));
-
-  if (uid1 < uid2)
-    return stmt1;
+  if (gimple_uid (stmt1_info->stmt) < gimple_uid (stmt2_info->stmt))
+    return stmt1_info;
   else
-    return stmt2;
+    return stmt2_info;
 }
 
-/* Return the later statement between STMT1 and STMT2.  */
+/* Return the later statement between STMT1_INFO and STMT2_INFO.  */
 
-static inline gimple *
-get_later_stmt (gimple *stmt1, gimple *stmt2)
+static inline stmt_vec_info
+get_later_stmt (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
 {
-  unsigned int uid1, uid2;
-
-  if (stmt1 == NULL)
-    return stmt2;
-
-  if (stmt2 == NULL)
-    return stmt1;
-
-  uid1 = gimple_uid (stmt1);
-  uid2 = gimple_uid (stmt2);
-
-  if (uid1 == 0 || uid2 == 0)
-    return NULL;
-
-  gcc_assert (uid1 <= stmt_vec_info_vec->length ()
-	      && uid2 <= stmt_vec_info_vec->length ());
-  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt1))
-			|| !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt1)))
-		       && (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt2))
-			   || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt2))));
+  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (stmt1_info)
+			|| !STMT_VINFO_RELATED_STMT (stmt1_info))
+		       && (STMT_VINFO_IN_PATTERN_P (stmt2_info)
+			   || !STMT_VINFO_RELATED_STMT (stmt2_info)));
 
-  if (uid1 > uid2)
-    return stmt1;
+  if (gimple_uid (stmt1_info->stmt) > gimple_uid (stmt2_info->stmt))
+    return stmt1_info;
   else
-    return stmt2;
+    return stmt2_info;
 }
 
 /* Return TRUE if a statement represented by STMT_INFO is a part of a
@@ -1674,7 +1642,7 @@ extern bool vect_make_slp_decision (loop
 extern void vect_detect_hybrid_slp (loop_vec_info);
 extern void vect_get_slp_defs (vec<tree> , slp_tree, vec<vec<tree> > *);
 extern bool vect_slp_bb (basic_block);
-extern gimple *vect_find_last_scalar_stmt_in_slp (slp_tree);
+extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
 extern bool is_simple_and_all_uses_invariant (gimple *, loop_vec_info);
 extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
 					    unsigned int * = NULL,
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:12.060939107 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:25.232822136 +0100
@@ -1838,18 +1838,17 @@ vect_supported_load_permutation_p (slp_i
 
 /* Find the last store in SLP INSTANCE.  */
 
-gimple *
+stmt_vec_info
 vect_find_last_scalar_stmt_in_slp (slp_tree node)
 {
-  gimple *last = NULL;
+  stmt_vec_info last = NULL;
   stmt_vec_info stmt_vinfo;
 
   for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
     {
       if (is_pattern_stmt_p (stmt_vinfo))
-	last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
-      else
-	last = get_later_stmt (stmt_vinfo, last);
+	stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
+      last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
     }
 
   return last;
@@ -3480,8 +3479,9 @@ vect_get_constant_vectors (tree op, slp_
 	      gimple_stmt_iterator gsi;
 	      if (place_after_defs)
 		{
-		  gsi = gsi_for_stmt
-		          (vect_find_last_scalar_stmt_in_slp (slp_node));
+		  stmt_vec_info last_stmt_info
+		    = vect_find_last_scalar_stmt_in_slp (slp_node);
+		  gsi = gsi_for_stmt (last_stmt_info->stmt);
 		  init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
 					   &gsi);
 		}
@@ -3910,7 +3910,8 @@ vect_schedule_slp_instance (slp_tree nod
 
   /* Vectorized stmts go before the last scalar stmt which is where
      all uses are ready.  */
-  si = gsi_for_stmt (vect_find_last_scalar_stmt_in_slp (node));
+  stmt_vec_info last_stmt_info = vect_find_last_scalar_stmt_in_slp (node);
+  si = gsi_for_stmt (last_stmt_info->stmt);
 
   /* Mark the first element of the reduction chain as reduction to properly
      transform the node.  In the analysis phase only the last element of the
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:18.856878757 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:25.228822172 +0100
@@ -216,8 +216,8 @@ vect_preserves_scalar_order_p (gimple *s
     stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
   if (is_pattern_stmt_p (stmtinfo_b))
     stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
-  gimple *earlier_stmt = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
-  return !DR_IS_WRITE (STMT_VINFO_DATA_REF (vinfo_for_stmt (earlier_stmt)));
+  stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
+  return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
 }
 
 /* A subroutine of vect_analyze_data_ref_dependence.  Handle
@@ -671,17 +671,17 @@ vect_slp_analyze_node_dependences (slp_i
   /* This walks over all stmts involved in the SLP load/store done
      in NODE verifying we can sink them up to the last stmt in the
      group.  */
-  gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
+  stmt_vec_info last_access_info = vect_find_last_scalar_stmt_in_slp (node);
   for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
     {
       stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
-      if (access_info == last_access)
+      if (access_info == last_access_info)
 	continue;
       data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
       ao_ref ref;
       bool ref_initialized_p = false;
       for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
-	   gsi_stmt (gsi) != last_access; gsi_next (&gsi))
+	   gsi_stmt (gsi) != last_access_info->stmt; gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
 	  if (! gimple_vuse (stmt)
@@ -757,14 +757,14 @@ vect_slp_analyze_instance_dependence (sl
     store = NULL;
 
   /* Verify we can sink stores to the vectorized stmt insert location.  */
-  gimple *last_store = NULL;
+  stmt_vec_info last_store_info = NULL;
   if (store)
     {
       if (! vect_slp_analyze_node_dependences (instance, store, vNULL, NULL))
 	return false;
 
       /* Mark stores in this instance and remember the last one.  */
-      last_store = vect_find_last_scalar_stmt_in_slp (store);
+      last_store_info = vect_find_last_scalar_stmt_in_slp (store);
       for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
 	gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, true);
     }
@@ -779,7 +779,7 @@ vect_slp_analyze_instance_dependence (sl
     if (! vect_slp_analyze_node_dependences (instance, load,
 					     store
 					     ? SLP_TREE_SCALAR_STMTS (store)
-					     : vNULL, last_store))
+					     : vNULL, last_store_info))
       {
 	res = false;
 	break;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [26/46] Make more use of dyn_cast in tree-vect*
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (23 preceding siblings ...)
  2018-07-24 10:02 ` [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info Richard Sandiford
@ 2018-07-24 10:03 ` Richard Sandiford
  2018-07-25  9:31   ` Richard Biener
  2018-07-24 10:03 ` [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos Richard Sandiford
                   ` (20 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:03 UTC (permalink / raw)
  To: gcc-patches

If we use stmt_vec_infos to represent statements in the vectoriser,
it's then more natural to use dyn_cast when processing the statement
as an assignment, call, etc.  This patch does that in a few more places.
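
For the record, the idiom being applied is roughly the following
(a simplified sketch of the hunks below, not standalone code):

  /* Before: test the generic stmt, and let each accessor re-check
     the statement code internally.  */
  if (is_gimple_call (stmt) && gimple_call_internal_p (stmt))
    ifn = gimple_call_internal_fn (stmt);

  /* After: dyn_cast yields a gcall * (or null), so the later
     accessors take the derived type and skip the dynamic check.  */
  if (gcall *call = dyn_cast <gcall *> (stmt))
    if (gimple_call_internal_p (call))
      ifn = gimple_call_internal_fn (call);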


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_check_gather_scatter): Pass the
	gcall rather than the generic gimple stmt to gimple_call_internal_fn.
	(vect_get_smallest_scalar_type, can_group_stmts_p): Use dyn_cast
	to get gassigns and gcalls, rather than operating on generic gimple
	stmts.
	* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p)
	(vect_mark_stmts_to_be_vectorized, vectorizable_store)
	(vectorizable_load, vect_analyze_stmt): Likewise.
	* tree-vect-loop.c (vectorizable_reduction): Likewise gphi.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:25.228822172 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:28.452793542 +0100
@@ -130,15 +130,16 @@ vect_get_smallest_scalar_type (gimple *s
 
   lhs = rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
 
-  if (is_gimple_assign (stmt)
-      && (gimple_assign_cast_p (stmt)
-          || gimple_assign_rhs_code (stmt) == DOT_PROD_EXPR
-          || gimple_assign_rhs_code (stmt) == WIDEN_SUM_EXPR
-          || gimple_assign_rhs_code (stmt) == WIDEN_MULT_EXPR
-          || gimple_assign_rhs_code (stmt) == WIDEN_LSHIFT_EXPR
-          || gimple_assign_rhs_code (stmt) == FLOAT_EXPR))
+  gassign *assign = dyn_cast <gassign *> (stmt);
+  if (assign
+      && (gimple_assign_cast_p (assign)
+	  || gimple_assign_rhs_code (assign) == DOT_PROD_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_SUM_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_MULT_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_LSHIFT_EXPR
+	  || gimple_assign_rhs_code (assign) == FLOAT_EXPR))
     {
-      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
+      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (assign));
 
       rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (rhs_type));
       if (rhs < lhs)
@@ -2850,21 +2851,23 @@ can_group_stmts_p (gimple *stmt1, gimple
   if (gimple_assign_single_p (stmt1))
     return gimple_assign_single_p (stmt2);
 
-  if (is_gimple_call (stmt1) && gimple_call_internal_p (stmt1))
+  gcall *call1 = dyn_cast <gcall *> (stmt1);
+  if (call1 && gimple_call_internal_p (call1))
     {
       /* Check for two masked loads or two masked stores.  */
-      if (!is_gimple_call (stmt2) || !gimple_call_internal_p (stmt2))
+      gcall *call2 = dyn_cast <gcall *> (stmt2);
+      if (!call2 || !gimple_call_internal_p (call2))
 	return false;
-      internal_fn ifn = gimple_call_internal_fn (stmt1);
+      internal_fn ifn = gimple_call_internal_fn (call1);
       if (ifn != IFN_MASK_LOAD && ifn != IFN_MASK_STORE)
 	return false;
-      if (ifn != gimple_call_internal_fn (stmt2))
+      if (ifn != gimple_call_internal_fn (call2))
 	return false;
 
       /* Check that the masks are the same.  Cope with casts of masks,
 	 like those created by build_mask_conversion.  */
-      tree mask1 = gimple_call_arg (stmt1, 2);
-      tree mask2 = gimple_call_arg (stmt2, 2);
+      tree mask1 = gimple_call_arg (call1, 2);
+      tree mask2 = gimple_call_arg (call2, 2);
       if (!operand_equal_p (mask1, mask2, 0))
 	{
 	  mask1 = strip_conversion (mask1);
@@ -3665,7 +3668,7 @@ vect_check_gather_scatter (gimple *stmt,
   gcall *call = dyn_cast <gcall *> (stmt);
   if (call && gimple_call_internal_p (call))
     {
-      ifn = gimple_call_internal_fn (stmt);
+      ifn = gimple_call_internal_fn (call);
       if (internal_gather_scatter_fn_p (ifn))
 	{
 	  vect_describe_gather_scatter_call (call, info);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:22.260848529 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:28.456793506 +0100
@@ -389,30 +389,31 @@ exist_non_indexing_operands_for_use_p (t
      Therefore, all we need to check is if STMT falls into the
      first case, and whether var corresponds to USE.  */
 
-  if (!gimple_assign_copy_p (stmt))
+  gassign *assign = dyn_cast <gassign *> (stmt);
+  if (!assign || !gimple_assign_copy_p (assign))
     {
-      if (is_gimple_call (stmt)
-	  && gimple_call_internal_p (stmt))
+      gcall *call = dyn_cast <gcall *> (stmt);
+      if (call && gimple_call_internal_p (call))
 	{
-	  internal_fn ifn = gimple_call_internal_fn (stmt);
+	  internal_fn ifn = gimple_call_internal_fn (call);
 	  int mask_index = internal_fn_mask_index (ifn);
 	  if (mask_index >= 0
-	      && use == gimple_call_arg (stmt, mask_index))
+	      && use == gimple_call_arg (call, mask_index))
 	    return true;
 	  int stored_value_index = internal_fn_stored_value_index (ifn);
 	  if (stored_value_index >= 0
-	      && use == gimple_call_arg (stmt, stored_value_index))
+	      && use == gimple_call_arg (call, stored_value_index))
 	    return true;
 	  if (internal_gather_scatter_fn_p (ifn)
-	      && use == gimple_call_arg (stmt, 1))
+	      && use == gimple_call_arg (call, 1))
 	    return true;
 	}
       return false;
     }
 
-  if (TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME)
+  if (TREE_CODE (gimple_assign_lhs (assign)) == SSA_NAME)
     return false;
-  operand = gimple_assign_rhs1 (stmt);
+  operand = gimple_assign_rhs1 (assign);
   if (TREE_CODE (operand) != SSA_NAME)
     return false;
 
@@ -739,10 +740,10 @@ vect_mark_stmts_to_be_vectorized (loop_v
           /* Pattern statements are not inserted into the code, so
              FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
              have to scan the RHS or function arguments instead.  */
-          if (is_gimple_assign (stmt))
-            {
-	      enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
-	      tree op = gimple_assign_rhs1 (stmt);
+	  if (gassign *assign = dyn_cast <gassign *> (stmt))
+	    {
+	      enum tree_code rhs_code = gimple_assign_rhs_code (assign);
+	      tree op = gimple_assign_rhs1 (assign);
 
 	      i = 1;
 	      if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op))
@@ -754,25 +755,25 @@ vect_mark_stmts_to_be_vectorized (loop_v
 		    return false;
 		  i = 2;
 		}
-	      for (; i < gimple_num_ops (stmt); i++)
-                {
-		  op = gimple_op (stmt, i);
+	      for (; i < gimple_num_ops (assign); i++)
+		{
+		  op = gimple_op (assign, i);
                   if (TREE_CODE (op) == SSA_NAME
 		      && !process_use (stmt, op, loop_vinfo, relevant,
 				       &worklist, false))
                     return false;
                  }
             }
-          else if (is_gimple_call (stmt))
-            {
-              for (i = 0; i < gimple_call_num_args (stmt); i++)
-                {
-                  tree arg = gimple_call_arg (stmt, i);
+	  else if (gcall *call = dyn_cast <gcall *> (stmt))
+	    {
+	      for (i = 0; i < gimple_call_num_args (call); i++)
+		{
+		  tree arg = gimple_call_arg (call, i);
 		  if (!process_use (stmt, arg, loop_vinfo, relevant,
 				    &worklist, false))
                     return false;
-                }
-            }
+		}
+	    }
         }
       else
         FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE)
@@ -6274,9 +6275,9 @@ vectorizable_store (gimple *stmt, gimple
   /* Is vectorizable store? */
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (is_gimple_assign (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt))
     {
-      tree scalar_dest = gimple_assign_lhs (stmt);
+      tree scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
 	  && is_pattern_stmt_p (stmt_info))
 	scalar_dest = TREE_OPERAND (scalar_dest, 0);
@@ -7445,13 +7446,13 @@ vectorizable_load (gimple *stmt, gimple_
     return false;
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (is_gimple_assign (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt))
     {
-      scalar_dest = gimple_assign_lhs (stmt);
+      scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) != SSA_NAME)
 	return false;
 
-      tree_code code = gimple_assign_rhs_code (stmt);
+      tree_code code = gimple_assign_rhs_code (assign);
       if (code != ARRAY_REF
 	  && code != BIT_FIELD_REF
 	  && code != INDIRECT_REF
@@ -9557,9 +9558,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
   if (STMT_VINFO_RELEVANT_P (stmt_info))
     {
       gcc_assert (!VECTOR_MODE_P (TYPE_MODE (gimple_expr_type (stmt))));
+      gcall *call = dyn_cast <gcall *> (stmt);
       gcc_assert (STMT_VINFO_VECTYPE (stmt_info)
-		  || (is_gimple_call (stmt)
-		      && gimple_call_lhs (stmt) == NULL_TREE));
+		  || (call && gimple_call_lhs (call) == NULL_TREE));
       *need_to_vectorize = true;
     }
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:22.260848529 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:28.456793506 +0100
@@ -6109,9 +6109,9 @@ vectorizable_reduction (gimple *stmt, gi
     gcc_assert (slp_node
 		&& REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
 
-  if (gimple_code (stmt) == GIMPLE_PHI)
+  if (gphi *phi = dyn_cast <gphi *> (stmt))
     {
-      tree phi_result = gimple_phi_result (stmt);
+      tree phi_result = gimple_phi_result (phi);
       /* Analysis is fully done on the reduction stmt invocation.  */
       if (! vec_stmt)
 	{
@@ -6141,7 +6141,7 @@ vectorizable_reduction (gimple *stmt, gi
       for (unsigned k = 1; k < gimple_num_ops (reduc_stmt); ++k)
 	{
 	  tree op = gimple_op (reduc_stmt, k);
-	  if (op == gimple_phi_result (stmt))
+	  if (op == phi_result)
 	    continue;
 	  if (k == 1
 	      && gimple_assign_rhs_code (reduc_stmt) == COND_EXPR)

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2)
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (26 preceding siblings ...)
  2018-07-24 10:03 ` [27/46] Remove duplicated stmt_vec_info lookups Richard Sandiford
@ 2018-07-24 10:04 ` Richard Sandiford
  2018-07-25 10:03   ` Richard Biener
  2018-07-24 10:04 ` [30/46] Use stmt_vec_infos rather than gimple stmts for worklists Richard Sandiford
                   ` (17 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:04 UTC (permalink / raw)
  To: gcc-patches

This second part handles the less mechanical cases, i.e. those that don't
just involve swapping a gimple stmt for an existing stmt_vec_info.
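
As an example of the shape of these changes, the
vect_create_epilog_for_reduction hunk below rewrites the walk over
copied phis roughly as follows (simplified, not standalone code):

  /* Before: walk gimple phis, re-looking up each stmt_vec_info
     through the global vinfo_for_stmt state.  */
  gimple *next_phi = new_phis[0];
  next_phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next_phi));
  tree second_vect = PHI_RESULT (next_phi);

  /* After: look the first phi up relative to the owning
     loop_vec_info and then walk the stmt_vec_infos directly.  */
  stmt_vec_info next_phi_info = loop_vinfo->lookup_stmt (new_phis[0]);
  next_phi_info = STMT_VINFO_RELATED_STMT (next_phi_info);
  tree second_vect = PHI_RESULT (next_phi_info->stmt);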


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (vect_analyze_loop_operations): Look up the
	statement before passing it to vect_analyze_stmt.
	(vect_create_epilog_for_reduction): Use a stmt_vec_info to walk
	the chain of phi vector definitions.  Track the exit phi via its
	stmt_vec_info.
	(vectorizable_reduction): Set cond_stmt_vinfo directly from the
	STMT_VINFO_REDUC_DEF.
	* tree-vect-slp.c (vect_get_place_in_interleaving_chain): Use
	stmt_vec_infos to handle the statement chains.
	(vect_get_slp_defs): Record the first statement in the node
	using a stmt_vec_info.
	* tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Look up
	statements here and pass their stmt_vec_info down to subroutines.
	(vect_init_vector_1): Hoist call to vinfo_for_stmt and pass it
	down to vect_finish_stmt_generation.
	(vect_init_vector, vect_get_vec_defs, vect_finish_replace_stmt)
	(vect_finish_stmt_generation): Call vinfo_for_stmt and pass
	stmt_vec_infos to subroutines.
	(vect_remove_stores): Use stmt_vec_infos to handle the statement
	chains.

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:35.376732054 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:38.964700191 +0100
@@ -1629,8 +1629,9 @@ vect_analyze_loop_operations (loop_vec_i
         {
 	  gimple *stmt = gsi_stmt (si);
 	  if (!gimple_clobber_p (stmt)
-	      && !vect_analyze_stmt (stmt, &need_to_vectorize, NULL, NULL,
-				     &cost_vec))
+	      && !vect_analyze_stmt (loop_vinfo->lookup_stmt (stmt),
+				     &need_to_vectorize,
+				     NULL, NULL, &cost_vec))
 	    return false;
         }
     } /* bbs */
@@ -4832,11 +4833,11 @@ vect_create_epilog_for_reduction (vec<tr
       tree first_vect = PHI_RESULT (new_phis[0]);
       gassign *new_vec_stmt = NULL;
       vec_dest = vect_create_destination_var (scalar_dest, vectype);
-      gimple *next_phi = new_phis[0];
+      stmt_vec_info next_phi_info = loop_vinfo->lookup_stmt (new_phis[0]);
       for (int k = 1; k < ncopies; ++k)
 	{
-	  next_phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next_phi));
-	  tree second_vect = PHI_RESULT (next_phi);
+	  next_phi_info = STMT_VINFO_RELATED_STMT (next_phi_info);
+	  tree second_vect = PHI_RESULT (next_phi_info->stmt);
           tree tem = make_ssa_name (vec_dest, new_vec_stmt);
           new_vec_stmt = gimple_build_assign (tem, code,
 					      first_vect, second_vect);
@@ -5573,11 +5574,12 @@ vect_create_epilog_for_reduction (vec<tr
   else
     ratio = 1;
 
+  stmt_vec_info epilog_stmt_info = NULL;
   for (k = 0; k < group_size; k++)
     {
       if (k % ratio == 0)
         {
-          epilog_stmt = new_phis[k / ratio];
+	  epilog_stmt_info = loop_vinfo->lookup_stmt (new_phis[k / ratio]);
 	  reduction_phi_info = reduction_phis[k / ratio];
 	  if (double_reduc)
 	    inner_phi = inner_phis[k / ratio];
@@ -5623,8 +5625,7 @@ vect_create_epilog_for_reduction (vec<tr
 	      if (double_reduc)
 		STMT_VINFO_VEC_STMT (exit_phi_vinfo) = inner_phi;
 	      else
-		STMT_VINFO_VEC_STMT (exit_phi_vinfo)
-		  = vinfo_for_stmt (epilog_stmt);
+		STMT_VINFO_VEC_STMT (exit_phi_vinfo) = epilog_stmt_info;
               if (!double_reduc
                   || STMT_VINFO_DEF_TYPE (exit_phi_vinfo)
                       != vect_double_reduction_def)
@@ -6070,7 +6071,7 @@ vectorizable_reduction (gimple *stmt, gi
   optab optab;
   tree new_temp = NULL_TREE;
   enum vect_def_type dt, cond_reduc_dt = vect_unknown_def_type;
-  gimple *cond_reduc_def_stmt = NULL;
+  stmt_vec_info cond_stmt_vinfo = NULL;
   enum tree_code cond_reduc_op_code = ERROR_MARK;
   tree scalar_type;
   bool is_simple_use;
@@ -6348,7 +6349,7 @@ vectorizable_reduction (gimple *stmt, gi
 	      && is_nonwrapping_integer_induction (def_stmt_info, loop))
 	    {
 	      cond_reduc_dt = dt;
-	      cond_reduc_def_stmt = def_stmt_info;
+	      cond_stmt_vinfo = def_stmt_info;
 	    }
 	}
     }
@@ -6454,7 +6455,6 @@ vectorizable_reduction (gimple *stmt, gi
 	}
       else if (cond_reduc_dt == vect_induction_def)
 	{
-	  stmt_vec_info cond_stmt_vinfo = vinfo_for_stmt (cond_reduc_def_stmt);
 	  tree base
 	    = STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED (cond_stmt_vinfo);
 	  tree step = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (cond_stmt_vinfo);
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:35.380732018 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:38.964700191 +0100
@@ -201,21 +201,23 @@ vect_free_oprnd_info (vec<slp_oprnd_info
 int
 vect_get_place_in_interleaving_chain (gimple *stmt, gimple *first_stmt)
 {
-  gimple *next_stmt = first_stmt;
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
+  stmt_vec_info next_stmt_info = first_stmt_info;
   int result = 0;
 
-  if (first_stmt != DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  if (first_stmt_info != DR_GROUP_FIRST_ELEMENT (stmt_info))
     return -1;
 
   do
     {
-      if (next_stmt == stmt)
+      if (next_stmt_info == stmt_info)
 	return result;
-      next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
-      if (next_stmt)
-	result += DR_GROUP_GAP (vinfo_for_stmt (next_stmt));
+      next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
+      if (next_stmt_info)
+	result += DR_GROUP_GAP (next_stmt_info);
     }
-  while (next_stmt);
+  while (next_stmt_info);
 
   return -1;
 }
@@ -3577,7 +3579,6 @@ vect_get_slp_vect_defs (slp_tree slp_nod
 vect_get_slp_defs (vec<tree> ops, slp_tree slp_node,
 		   vec<vec<tree> > *vec_oprnds)
 {
-  gimple *first_stmt;
   int number_of_vects = 0, i;
   unsigned int child_index = 0;
   HOST_WIDE_INT lhs_size_unit, rhs_size_unit;
@@ -3586,7 +3587,7 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
   tree oprnd;
   bool vectorized_defs;
 
-  first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
+  stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
   FOR_EACH_VEC_ELT (ops, i, oprnd)
     {
       /* For each operand we check if it has vectorized definitions in a child
@@ -3637,8 +3638,8 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
                  vect_schedule_slp_instance (), fix it by replacing LHS with
                  RHS, if necessary.  See vect_get_smallest_scalar_type () for
                  details.  */
-              vect_get_smallest_scalar_type (first_stmt, &lhs_size_unit,
-                                             &rhs_size_unit);
+	      vect_get_smallest_scalar_type (first_stmt_info, &lhs_size_unit,
+					     &rhs_size_unit);
               if (rhs_size_unit != lhs_size_unit)
                 {
                   number_of_vects *= rhs_size_unit;
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:35.384731983 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:38.968700155 +0100
@@ -622,7 +622,6 @@ vect_mark_stmts_to_be_vectorized (loop_v
   unsigned int i;
   stmt_vec_info stmt_vinfo;
   basic_block bb;
-  gimple *phi;
   bool live_p;
   enum vect_relevant relevant;
 
@@ -636,27 +635,27 @@ vect_mark_stmts_to_be_vectorized (loop_v
       bb = bbs[i];
       for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si))
 	{
-	  phi = gsi_stmt (si);
+	  stmt_vec_info phi_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location, "init: phi relevant? ");
-	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
+	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0);
 	    }
 
-	  if (vect_stmt_relevant_p (phi, loop_vinfo, &relevant, &live_p))
-	    vect_mark_relevant (&worklist, phi, relevant, live_p);
+	  if (vect_stmt_relevant_p (phi_info, loop_vinfo, &relevant, &live_p))
+	    vect_mark_relevant (&worklist, phi_info, relevant, live_p);
 	}
       for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
 	{
-	  stmt = gsi_stmt (si);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location, "init: stmt relevant? ");
-	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	    }
 
-	  if (vect_stmt_relevant_p (stmt, loop_vinfo, &relevant, &live_p))
-	    vect_mark_relevant (&worklist, stmt, relevant, live_p);
+	  if (vect_stmt_relevant_p (stmt_info, loop_vinfo, &relevant, &live_p))
+	    vect_mark_relevant (&worklist, stmt_info, relevant, live_p);
 	}
     }
 
@@ -1350,11 +1349,11 @@ vect_get_load_cost (stmt_vec_info stmt_i
 static void
 vect_init_vector_1 (gimple *stmt, gimple *new_stmt, gimple_stmt_iterator *gsi)
 {
+  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   if (gsi)
-    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+    vect_finish_stmt_generation (stmt_vinfo, new_stmt, gsi);
   else
     {
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
       loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
 
       if (loop_vinfo)
@@ -1404,6 +1403,7 @@ vect_init_vector_1 (gimple *stmt, gimple
 tree
 vect_init_vector (gimple *stmt, tree val, tree type, gimple_stmt_iterator *gsi)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   gimple *init_stmt;
   tree new_temp;
 
@@ -1427,7 +1427,7 @@ vect_init_vector (gimple *stmt, tree val
 		  new_temp = make_ssa_name (TREE_TYPE (type));
 		  init_stmt = gimple_build_assign (new_temp, COND_EXPR,
 						   val, true_val, false_val);
-		  vect_init_vector_1 (stmt, init_stmt, gsi);
+		  vect_init_vector_1 (stmt_info, init_stmt, gsi);
 		  val = new_temp;
 		}
 	    }
@@ -1443,7 +1443,7 @@ vect_init_vector (gimple *stmt, tree val
 							      val));
 	      else
 		init_stmt = gimple_build_assign (new_temp, NOP_EXPR, val);
-	      vect_init_vector_1 (stmt, init_stmt, gsi);
+	      vect_init_vector_1 (stmt_info, init_stmt, gsi);
 	      val = new_temp;
 	    }
 	}
@@ -1452,7 +1452,7 @@ vect_init_vector (gimple *stmt, tree val
 
   new_temp = vect_get_new_ssa_name (type, vect_simple_var, "cst_");
   init_stmt = gimple_build_assign  (new_temp, val);
-  vect_init_vector_1 (stmt, init_stmt, gsi);
+  vect_init_vector_1 (stmt_info, init_stmt, gsi);
   return new_temp;
 }
 
@@ -1690,6 +1690,7 @@ vect_get_vec_defs (tree op0, tree op1, g
 		   vec<tree> *vec_oprnds1,
 		   slp_tree slp_node)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   if (slp_node)
     {
       int nops = (op1 == NULL_TREE) ? 1 : 2;
@@ -1711,13 +1712,13 @@ vect_get_vec_defs (tree op0, tree op1, g
       tree vec_oprnd;
 
       vec_oprnds0->create (1);
-      vec_oprnd = vect_get_vec_def_for_operand (op0, stmt);
+      vec_oprnd = vect_get_vec_def_for_operand (op0, stmt_info);
       vec_oprnds0->quick_push (vec_oprnd);
 
       if (op1)
 	{
 	  vec_oprnds1->create (1);
-	  vec_oprnd = vect_get_vec_def_for_operand (op1, stmt);
+	  vec_oprnd = vect_get_vec_def_for_operand (op1, stmt_info);
 	  vec_oprnds1->quick_push (vec_oprnd);
 	}
     }
@@ -1760,12 +1761,13 @@ vect_finish_stmt_generation_1 (gimple *s
 stmt_vec_info
 vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
 {
-  gcc_assert (gimple_get_lhs (stmt) == gimple_get_lhs (vec_stmt));
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  gcc_assert (gimple_get_lhs (stmt_info->stmt) == gimple_get_lhs (vec_stmt));
 
-  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+  gimple_stmt_iterator gsi = gsi_for_stmt (stmt_info->stmt);
   gsi_replace (&gsi, vec_stmt, false);
 
-  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
+  return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
 }
 
 /* Add VEC_STMT to the vectorized implementation of STMT and insert it
@@ -1775,7 +1777,8 @@ vect_finish_replace_stmt (gimple *stmt,
 vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
 			     gimple_stmt_iterator *gsi)
 {
-  gcc_assert (gimple_code (stmt) != GIMPLE_LABEL);
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  gcc_assert (gimple_code (stmt_info->stmt) != GIMPLE_LABEL);
 
   if (!gsi_end_p (*gsi)
       && gimple_has_mem_ops (vec_stmt))
@@ -1804,7 +1807,7 @@ vect_finish_stmt_generation (gimple *stm
 	}
     }
   gsi_insert_before (gsi, vec_stmt, GSI_SAME_STMT);
-  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
+  return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
 }
 
 /* We want to vectorize a call to combined function CFN with function
@@ -9856,23 +9859,21 @@ vect_transform_stmt (gimple *stmt, gimpl
 void
 vect_remove_stores (gimple *first_stmt)
 {
-  gimple *next = first_stmt;
+  stmt_vec_info next_stmt_info = vinfo_for_stmt (first_stmt);
   gimple_stmt_iterator next_si;
 
-  while (next)
+  while (next_stmt_info)
     {
-      stmt_vec_info stmt_info = vinfo_for_stmt (next);
-
-      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
-      if (is_pattern_stmt_p (stmt_info))
-	next = STMT_VINFO_RELATED_STMT (stmt_info);
+      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
+      if (is_pattern_stmt_p (next_stmt_info))
+	next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
       /* Free the attached stmt_vec_info and remove the stmt.  */
-      next_si = gsi_for_stmt (next);
-      unlink_stmt_vdef (next);
+      next_si = gsi_for_stmt (next_stmt_info->stmt);
+      unlink_stmt_vdef (next_stmt_info->stmt);
       gsi_remove (&next_si, true);
-      release_defs (next);
-      free_stmt_vec_info (next);
-      next = tmp;
+      release_defs (next_stmt_info->stmt);
+      free_stmt_vec_info (next_stmt_info);
+      next_stmt_info = tmp;
     }
 }
 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [30/46] Use stmt_vec_infos rather than gimple stmts for worklists
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (27 preceding siblings ...)
  2018-07-24 10:04 ` [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2) Richard Sandiford
@ 2018-07-24 10:04 ` Richard Sandiford
  2018-07-25 10:04   ` Richard Biener
  2018-07-24 10:04 ` [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1) Richard Sandiford
                   ` (16 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:04 UTC (permalink / raw)
  To: gcc-patches

2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-loop.c (vect_analyze_scalar_cycles_1): Change the type
	of the worklist from a vector of gimple stmts to a vector of
	stmt_vec_infos.
	* tree-vect-stmts.c (vect_mark_relevant, process_use)
	(vect_mark_stmts_to_be_vectorized): Likewise.
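
The change is uniform: the worklists switch element type so that each
pop already has the stmt_vec_info in hand.  Sketched from the hunks
below (not standalone code):

  /* Before: store raw gimple stmts and look the info up on pop.  */
  auto_vec<gimple *, 64> worklist;
  gimple *stmt = worklist.pop ();
  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);

  /* After: store stmt_vec_infos directly; the underlying gimple
     stmt stays reachable as stmt_vinfo->stmt.  */
  auto_vec<stmt_vec_info, 64> worklist;
  stmt_vec_info stmt_vinfo = worklist.pop ();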

Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:38.964700191 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:42.472669038 +0100
@@ -474,7 +474,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
 {
   basic_block bb = loop->header;
   tree init, step;
-  auto_vec<gimple *, 64> worklist;
+  auto_vec<stmt_vec_info, 64> worklist;
   gphi_iterator gsi;
   bool double_reduc;
 
@@ -543,9 +543,9 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
   /* Second - identify all reductions and nested cycles.  */
   while (worklist.length () > 0)
     {
-      gimple *phi = worklist.pop ();
+      stmt_vec_info stmt_vinfo = worklist.pop ();
+      gphi *phi = as_a <gphi *> (stmt_vinfo->stmt);
       tree def = PHI_RESULT (phi);
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
 
       if (dump_enabled_p ())
         {
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:38.968700155 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:42.472669038 +0100
@@ -194,7 +194,7 @@ vect_clobber_variable (gimple *stmt, gim
    Mark STMT as "relevant for vectorization" and add it to WORKLIST.  */
 
 static void
-vect_mark_relevant (vec<gimple *> *worklist, gimple *stmt,
+vect_mark_relevant (vec<stmt_vec_info> *worklist, gimple *stmt,
 		    enum vect_relevant relevant, bool live_p)
 {
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
@@ -453,7 +453,7 @@ exist_non_indexing_operands_for_use_p (t
 
 static bool
 process_use (gimple *stmt, tree use, loop_vec_info loop_vinfo,
-	     enum vect_relevant relevant, vec<gimple *> *worklist,
+	     enum vect_relevant relevant, vec<stmt_vec_info> *worklist,
 	     bool force)
 {
   stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
@@ -618,16 +618,14 @@ vect_mark_stmts_to_be_vectorized (loop_v
   basic_block *bbs = LOOP_VINFO_BBS (loop_vinfo);
   unsigned int nbbs = loop->num_nodes;
   gimple_stmt_iterator si;
-  gimple *stmt;
   unsigned int i;
-  stmt_vec_info stmt_vinfo;
   basic_block bb;
   bool live_p;
   enum vect_relevant relevant;
 
   DUMP_VECT_SCOPE ("vect_mark_stmts_to_be_vectorized");
 
-  auto_vec<gimple *, 64> worklist;
+  auto_vec<stmt_vec_info, 64> worklist;
 
   /* 1. Init worklist.  */
   for (i = 0; i < nbbs; i++)
@@ -665,17 +663,17 @@ vect_mark_stmts_to_be_vectorized (loop_v
       use_operand_p use_p;
       ssa_op_iter iter;
 
-      stmt = worklist.pop ();
+      stmt_vec_info stmt_vinfo = worklist.pop ();
       if (dump_enabled_p ())
 	{
-          dump_printf_loc (MSG_NOTE, vect_location, "worklist: examine stmt: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	  dump_printf_loc (MSG_NOTE, vect_location,
+			   "worklist: examine stmt: ");
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
 	}
 
       /* Examine the USEs of STMT. For each USE, mark the stmt that defines it
 	 (DEF_STMT) as relevant/irrelevant according to the relevance property
 	 of STMT.  */
-      stmt_vinfo = vinfo_for_stmt (stmt);
       relevant = STMT_VINFO_RELEVANT (stmt_vinfo);
 
       /* Generally, the relevance property of STMT (in STMT_VINFO_RELEVANT) is

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1)
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (28 preceding siblings ...)
  2018-07-24 10:04 ` [30/46] Use stmt_vec_infos rather than gimple stmts for worklists Richard Sandiford
@ 2018-07-24 10:04 ` Richard Sandiford
  2018-07-25  9:33   ` Richard Biener
  2018-07-24 10:05 ` [32/46] Use stmt_vec_info in function interfaces (part 2) Richard Sandiford
                   ` (15 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:04 UTC (permalink / raw)
  To: gcc-patches

This first part makes functions use stmt_vec_infos instead of
gimple stmts in cases where the stmt_vec_info was already available
and where the change is mechanical.  Most of it is just replacing
"stmt" with "stmt_info".


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_slp_analyze_node_dependences)
	(vect_check_gather_scatter, vect_create_data_ref_ptr, bump_vector_ptr)
	(vect_permute_store_chain, vect_setup_realignment)
	(vect_permute_load_chain, vect_shift_permute_load_chain)
	(vect_transform_grouped_load): Use stmt_vec_info rather than gimple
	stmts internally, and when passing values to other vectorizer routines.
	* tree-vect-loop-manip.c (vect_can_advance_ivs_p): Likewise.
	* tree-vect-loop.c (vect_analyze_scalar_cycles_1)
	(vect_analyze_loop_operations, get_initial_def_for_reduction)
	(vect_create_epilog_for_reduction, vectorize_fold_left_reduction)
	(vectorizable_reduction, vectorizable_induction)
	(vectorizable_live_operation, vect_transform_loop_stmt)
	(vect_transform_loop): Likewise.
	* tree-vect-patterns.c (vect_reassociating_reduction_p)
	(vect_recog_widen_op_pattern, vect_recog_mixed_size_cond_pattern)
	(vect_recog_bool_pattern, vect_recog_gather_scatter_pattern): Likewise.
	* tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
	(vect_slp_analyze_node_operations_1): Likewise.
	* tree-vect-stmts.c (vect_mark_relevant, process_use)
	(exist_non_indexing_operands_for_use_p, vect_init_vector_1)
	(vect_mark_stmts_to_be_vectorized, vect_get_vec_def_for_operand)
	(vect_finish_stmt_generation_1, get_group_load_store_type)
	(get_load_store_type, vect_build_gather_load_calls)
	(vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
	(vect_create_vectorized_demotion_stmts, vectorizable_conversion)
	(vectorizable_assignment, vectorizable_shift, vectorizable_operation)
	(vectorizable_store, vectorizable_load, vectorizable_condition)
	(vectorizable_comparison, vect_analyze_stmt, vect_transform_stmt)
	(supportable_widening_operation): Likewise.
	(vect_get_vector_types_for_stmt): Likewise.
	* tree-vectorizer.h (vect_dr_behavior): Likewise.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:31.736764378 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:35.376732054 +0100
@@ -712,7 +712,7 @@ vect_slp_analyze_node_dependences (slp_i
 	     been sunk to (and we verify if we can do that as well).  */
 	  if (gimple_visited_p (stmt))
 	    {
-	      if (stmt != last_store)
+	      if (stmt_info != last_store)
 		continue;
 	      unsigned i;
 	      stmt_vec_info store_info;
@@ -3666,7 +3666,7 @@ vect_check_gather_scatter (gimple *stmt,
 
   /* See whether this is already a call to a gather/scatter internal function.
      If not, see whether it's a masked load or store.  */
-  gcall *call = dyn_cast <gcall *> (stmt);
+  gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
   if (call && gimple_call_internal_p (call))
     {
       ifn = gimple_call_internal_fn (call);
@@ -4677,8 +4677,8 @@ vect_create_data_ref_ptr (gimple *stmt,
   if (loop_vinfo)
     {
       loop = LOOP_VINFO_LOOP (loop_vinfo);
-      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt);
-      containing_loop = (gimple_bb (stmt))->loop_father;
+      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info);
+      containing_loop = (gimple_bb (stmt_info->stmt))->loop_father;
       pe = loop_preheader_edge (loop);
     }
   else
@@ -4786,7 +4786,7 @@ vect_create_data_ref_ptr (gimple *stmt,
 
   /* Create: (&(base[init_val+offset]+byte_offset) in the loop preheader.  */
 
-  new_temp = vect_create_addr_base_for_vector_ref (stmt, &new_stmt_list,
+  new_temp = vect_create_addr_base_for_vector_ref (stmt_info, &new_stmt_list,
 						   offset, byte_offset);
   if (new_stmt_list)
     {
@@ -4934,7 +4934,7 @@ bump_vector_ptr (tree dataref_ptr, gimpl
     new_dataref_ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
   incr_stmt = gimple_build_assign (new_dataref_ptr, POINTER_PLUS_EXPR,
 				   dataref_ptr, update);
-  vect_finish_stmt_generation (stmt, incr_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, incr_stmt, gsi);
 
   /* Copy the points-to information if it exists. */
   if (DR_PTR_INFO (dr))
@@ -5282,7 +5282,7 @@ vect_permute_store_chain (vec<tree> dr_c
 	  data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low");
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1,
 					   vect2, perm3_mask_low);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 
 	  vect1 = data_ref;
 	  vect2 = dr_chain[2];
@@ -5293,7 +5293,7 @@ vect_permute_store_chain (vec<tree> dr_c
 	  data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high");
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1,
 					   vect2, perm3_mask_high);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	  (*result_chain)[j] = data_ref;
 	}
     }
@@ -5332,7 +5332,7 @@ vect_permute_store_chain (vec<tree> dr_c
 		high = make_temp_ssa_name (vectype, NULL, "vect_inter_high");
 		perm_stmt = gimple_build_assign (high, VEC_PERM_EXPR, vect1,
 						 vect2, perm_mask_high);
-		vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+		vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 		(*result_chain)[2*j] = high;
 
 		/* Create interleaving stmt:
@@ -5342,7 +5342,7 @@ vect_permute_store_chain (vec<tree> dr_c
 		low = make_temp_ssa_name (vectype, NULL, "vect_inter_low");
 		perm_stmt = gimple_build_assign (low, VEC_PERM_EXPR, vect1,
 						 vect2, perm_mask_low);
-		vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+		vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 		(*result_chain)[2*j+1] = low;
 	      }
 	    memcpy (dr_chain.address (), result_chain->address (),
@@ -5415,7 +5415,7 @@ vect_setup_realignment (gimple *stmt, gi
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   struct loop *loop = NULL;
   edge pe = NULL;
-  tree scalar_dest = gimple_assign_lhs (stmt);
+  tree scalar_dest = gimple_assign_lhs (stmt_info->stmt);
   tree vec_dest;
   gimple *inc;
   tree ptr;
@@ -5429,13 +5429,13 @@ vect_setup_realignment (gimple *stmt, gi
   bool inv_p;
   bool compute_in_loop = false;
   bool nested_in_vect_loop = false;
-  struct loop *containing_loop = (gimple_bb (stmt))->loop_father;
+  struct loop *containing_loop = (gimple_bb (stmt_info->stmt))->loop_father;
   struct loop *loop_for_initial_load = NULL;
 
   if (loop_vinfo)
     {
       loop = LOOP_VINFO_LOOP (loop_vinfo);
-      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt);
+      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info);
     }
 
   gcc_assert (alignment_support_scheme == dr_explicit_realign
@@ -5518,9 +5518,9 @@ vect_setup_realignment (gimple *stmt, gi
 
       gcc_assert (!compute_in_loop);
       vec_dest = vect_create_destination_var (scalar_dest, vectype);
-      ptr = vect_create_data_ref_ptr (stmt, vectype, loop_for_initial_load,
-				      NULL_TREE, &init_addr, NULL, &inc,
-				      true, &inv_p);
+      ptr = vect_create_data_ref_ptr (stmt_info, vectype,
+				      loop_for_initial_load, NULL_TREE,
+				      &init_addr, NULL, &inc, true, &inv_p);
       if (TREE_CODE (ptr) == SSA_NAME)
 	new_temp = copy_ssa_name (ptr);
       else
@@ -5562,7 +5562,7 @@ vect_setup_realignment (gimple *stmt, gi
       if (!init_addr)
 	{
 	  /* Generate the INIT_ADDR computation outside LOOP.  */
-	  init_addr = vect_create_addr_base_for_vector_ref (stmt, &stmts,
+	  init_addr = vect_create_addr_base_for_vector_ref (stmt_info, &stmts,
 							    NULL_TREE);
           if (loop)
             {
@@ -5890,7 +5890,7 @@ vect_permute_load_chain (vec<tree> dr_ch
 	  data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low");
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect,
 					   second_vect, perm3_mask_low);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 
 	  /* Create interleaving stmt (high part of):
 	     high = VEC_PERM_EXPR <first_vect, second_vect2, {k, 3 + k, 6 + k,
@@ -5900,7 +5900,7 @@ vect_permute_load_chain (vec<tree> dr_ch
 	  data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high");
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect,
 					   second_vect, perm3_mask_high);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	  (*result_chain)[k] = data_ref;
 	}
     }
@@ -5935,7 +5935,7 @@ vect_permute_load_chain (vec<tree> dr_ch
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       first_vect, second_vect,
 					       perm_mask_even);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      (*result_chain)[j/2] = data_ref;
 
 	      /* data_ref = permute_odd (first_data_ref, second_data_ref);  */
@@ -5943,7 +5943,7 @@ vect_permute_load_chain (vec<tree> dr_ch
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       first_vect, second_vect,
 					       perm_mask_odd);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      (*result_chain)[j/2+length/2] = data_ref;
 	    }
 	  memcpy (dr_chain.address (), result_chain->address (),
@@ -6143,26 +6143,26 @@ vect_shift_permute_load_chain (vec<tree>
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       first_vect, first_vect,
 					       perm2_mask1);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      vect[0] = data_ref;
 
 	      data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle2");
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       second_vect, second_vect,
 					       perm2_mask2);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      vect[1] = data_ref;
 
 	      data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift");
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       vect[0], vect[1], shift1_mask);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      (*result_chain)[j/2 + length/2] = data_ref;
 
 	      data_ref = make_temp_ssa_name (vectype, NULL, "vect_select");
 	      perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					       vect[0], vect[1], select_mask);
-	      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	      (*result_chain)[j/2] = data_ref;
 	    }
 	  memcpy (dr_chain.address (), result_chain->address (),
@@ -6259,7 +6259,7 @@ vect_shift_permute_load_chain (vec<tree>
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					   dr_chain[k], dr_chain[k],
 					   perm3_mask);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	  vect[k] = data_ref;
 	}
 
@@ -6269,7 +6269,7 @@ vect_shift_permute_load_chain (vec<tree>
 	  perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
 					   vect[k % 3], vect[(k + 1) % 3],
 					   shift1_mask);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	  vect_shift[k] = data_ref;
 	}
 
@@ -6280,7 +6280,7 @@ vect_shift_permute_load_chain (vec<tree>
 					   vect_shift[(4 - k) % 3],
 					   vect_shift[(3 - k) % 3],
 					   shift2_mask);
-	  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 	  vect[k] = data_ref;
 	}
 
@@ -6289,13 +6289,13 @@ vect_shift_permute_load_chain (vec<tree>
       data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift3");
       perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[0],
 				       vect[0], shift3_mask);
-      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
       (*result_chain)[nelt % 3] = data_ref;
 
       data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift4");
       perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[1],
 				       vect[1], shift4_mask);
-      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
       (*result_chain)[0] = data_ref;
       return true;
     }
@@ -6328,10 +6328,10 @@ vect_transform_grouped_load (gimple *stm
   mode = TYPE_MODE (STMT_VINFO_VECTYPE (stmt_info));
   if (targetm.sched.reassociation_width (VEC_PERM_EXPR, mode) > 1
       || pow2p_hwi (size)
-      || !vect_shift_permute_load_chain (dr_chain, size, stmt,
+      || !vect_shift_permute_load_chain (dr_chain, size, stmt_info,
 					 gsi, &result_chain))
-    vect_permute_load_chain (dr_chain, size, stmt, gsi, &result_chain);
-  vect_record_grouped_load_vectors (stmt, result_chain);
+    vect_permute_load_chain (dr_chain, size, stmt_info, gsi, &result_chain);
+  vect_record_grouped_load_vectors (stmt_info, result_chain);
   result_chain.release ();
 }
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:23:31.736764378 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:23:35.376732054 +0100
@@ -1380,8 +1380,8 @@ vect_can_advance_ivs_p (loop_vec_info lo
       stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
       if (dump_enabled_p ())
 	{
-          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
+	  dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0);
 	}
 
       /* Skip virtual phi's. The data dependences that are associated with
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:31.740764343 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:35.376732054 +0100
@@ -526,7 +526,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
 	  || (LOOP_VINFO_LOOP (loop_vinfo) != loop
 	      && TREE_CODE (step) != INTEGER_CST))
 	{
-	  worklist.safe_push (phi);
+	  worklist.safe_push (stmt_vinfo);
 	  continue;
 	}
 
@@ -1595,11 +1595,12 @@ vect_analyze_loop_operations (loop_vec_i
               need_to_vectorize = true;
               if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_induction_def
 		  && ! PURE_SLP_STMT (stmt_info))
-                ok = vectorizable_induction (phi, NULL, NULL, NULL, &cost_vec);
+		ok = vectorizable_induction (stmt_info, NULL, NULL, NULL,
+					     &cost_vec);
 	      else if ((STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
 			|| STMT_VINFO_DEF_TYPE (stmt_info) == vect_nested_cycle)
 		       && ! PURE_SLP_STMT (stmt_info))
-		ok = vectorizable_reduction (phi, NULL, NULL, NULL, NULL,
+		ok = vectorizable_reduction (stmt_info, NULL, NULL, NULL, NULL,
 					     &cost_vec);
             }
 
@@ -1607,7 +1608,7 @@ vect_analyze_loop_operations (loop_vec_i
 	  if (ok
 	      && STMT_VINFO_LIVE_P (stmt_info)
 	      && !PURE_SLP_STMT (stmt_info))
-	    ok = vectorizable_live_operation (phi, NULL, NULL, -1, NULL,
+	    ok = vectorizable_live_operation (stmt_info, NULL, NULL, -1, NULL,
 					      &cost_vec);
 
           if (!ok)
@@ -4045,7 +4046,7 @@ get_initial_def_for_reduction (gimple *s
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree scalar_type = TREE_TYPE (init_val);
   tree vectype = get_vectype_for_scalar_type (scalar_type);
-  enum tree_code code = gimple_assign_rhs_code (stmt);
+  enum tree_code code = gimple_assign_rhs_code (stmt_vinfo->stmt);
   tree def_for_init;
   tree init_def;
   REAL_VALUE_TYPE real_init_val = dconst0;
@@ -4057,8 +4058,8 @@ get_initial_def_for_reduction (gimple *s
   gcc_assert (POINTER_TYPE_P (scalar_type) || INTEGRAL_TYPE_P (scalar_type)
 	      || SCALAR_FLOAT_TYPE_P (scalar_type));
 
-  gcc_assert (nested_in_vect_loop_p (loop, stmt)
-	      || loop == (gimple_bb (stmt))->loop_father);
+  gcc_assert (nested_in_vect_loop_p (loop, stmt_vinfo)
+	      || loop == (gimple_bb (stmt_vinfo->stmt))->loop_father);
 
   vect_reduction_type reduction_type
     = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_vinfo);
@@ -4127,7 +4128,7 @@ get_initial_def_for_reduction (gimple *s
 	    if (reduction_type != COND_REDUCTION
 		&& reduction_type != EXTRACT_LAST_REDUCTION)
 	      {
-		init_def = vect_get_vec_def_for_operand (init_val, stmt);
+		init_def = vect_get_vec_def_for_operand (init_val, stmt_vinfo);
 		break;
 	      }
 	  }
@@ -4406,7 +4407,7 @@ vect_create_epilog_for_reduction (vec<tr
   tree vec_dest;
   tree new_temp = NULL_TREE, new_dest, new_name, new_scalar_dest;
   gimple *epilog_stmt = NULL;
-  enum tree_code code = gimple_assign_rhs_code (stmt);
+  enum tree_code code = gimple_assign_rhs_code (stmt_info->stmt);
   gimple *exit_phi;
   tree bitsize;
   tree adjustment_def = NULL;
@@ -4435,7 +4436,7 @@ vect_create_epilog_for_reduction (vec<tr
   if (slp_node)
     group_size = SLP_TREE_SCALAR_STMTS (slp_node).length (); 
 
-  if (nested_in_vect_loop_p (loop, stmt))
+  if (nested_in_vect_loop_p (loop, stmt_info))
     {
       outer_loop = loop;
       loop = loop->inner;
@@ -4504,11 +4505,13 @@ vect_create_epilog_for_reduction (vec<tr
 	  /* Do not use an adjustment def as that case is not supported
 	     correctly if ncopies is not one.  */
 	  vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
-	  vec_initial_def = vect_get_vec_def_for_operand (initial_def, stmt);
+	  vec_initial_def = vect_get_vec_def_for_operand (initial_def,
+							  stmt_info);
 	}
       else
-	vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
-							 &adjustment_def);
+	vec_initial_def
+	  = get_initial_def_for_reduction (stmt_info, initial_def,
+					   &adjustment_def);
       vec_initial_defs.create (1);
       vec_initial_defs.quick_push (vec_initial_def);
     }
@@ -5676,7 +5679,7 @@ vect_create_epilog_for_reduction (vec<tr
                   preheader_arg = PHI_ARG_DEF_FROM_EDGE (use_stmt,
                                              loop_preheader_edge (outer_loop));
                   vect_phi_init = get_initial_def_for_reduction
-		    (stmt, preheader_arg, NULL);
+		    (stmt_info, preheader_arg, NULL);
 
                   /* Update phi node arguments with vs0 and vs2.  */
                   add_phi_arg (vect_phi, vect_phi_init,
@@ -5841,7 +5844,7 @@ vectorize_fold_left_reduction (gimple *s
   else
     ncopies = vect_get_num_copies (loop_vinfo, vectype_in);
 
-  gcc_assert (!nested_in_vect_loop_p (loop, stmt));
+  gcc_assert (!nested_in_vect_loop_p (loop, stmt_info));
   gcc_assert (ncopies == 1);
   gcc_assert (TREE_CODE_LENGTH (code) == binary_op);
   gcc_assert (reduc_index == (code == MINUS_EXPR ? 0 : 1));
@@ -5859,13 +5862,14 @@ vectorize_fold_left_reduction (gimple *s
   auto_vec<tree> vec_oprnds0;
   if (slp_node)
     {
-      vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL, slp_node);
+      vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL,
+			 slp_node);
       group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
       scalar_dest_def_info = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
     }
   else
     {
-      tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt);
+      tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt_info);
       vec_oprnds0.create (1);
       vec_oprnds0.quick_push (loop_vec_def0);
       scalar_dest_def_info = stmt_info;
@@ -6099,7 +6103,7 @@ vectorizable_reduction (gimple *stmt, gi
       && STMT_VINFO_DEF_TYPE (stmt_info) != vect_nested_cycle)
     return false;
 
-  if (nested_in_vect_loop_p (loop, stmt))
+  if (nested_in_vect_loop_p (loop, stmt_info))
     {
       loop = loop->inner;
       nested_cycle = true;
@@ -6109,7 +6113,7 @@ vectorizable_reduction (gimple *stmt, gi
     gcc_assert (slp_node
 		&& REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
 
-  if (gphi *phi = dyn_cast <gphi *> (stmt))
+  if (gphi *phi = dyn_cast <gphi *> (stmt_info->stmt))
     {
       tree phi_result = gimple_phi_result (phi);
       /* Analysis is fully done on the reduction stmt invocation.  */
@@ -6164,7 +6168,7 @@ vectorizable_reduction (gimple *stmt, gi
 	  && STMT_VINFO_RELEVANT (reduc_stmt_info) <= vect_used_only_live
 	  && (use_stmt_info = loop_vinfo->lookup_single_use (phi_result))
 	  && (use_stmt_info == reduc_stmt_info
-	      || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt))
+	      || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt_info))
 	single_defuse_cycle = true;
 
       /* Create the destination vector  */
@@ -6548,7 +6552,7 @@ vectorizable_reduction (gimple *stmt, gi
     {
       /* Only call during the analysis stage, otherwise we'll lose
 	 STMT_VINFO_TYPE.  */
-      if (!vec_stmt && !vectorizable_condition (stmt, gsi, NULL,
+      if (!vec_stmt && !vectorizable_condition (stmt_info, gsi, NULL,
 						ops[reduc_index], 0, NULL,
 						cost_vec))
         {
@@ -6935,7 +6939,7 @@ vectorizable_reduction (gimple *stmt, gi
       && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
       && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
       && (use_stmt_info == stmt_info
-	  || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt))
+	  || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt_info))
     {
       single_defuse_cycle = true;
       epilog_copies = 1;
@@ -7015,13 +7019,13 @@ vectorizable_reduction (gimple *stmt, gi
 
   if (reduction_type == FOLD_LEFT_REDUCTION)
     return vectorize_fold_left_reduction
-      (stmt, gsi, vec_stmt, slp_node, reduc_def_phi, code,
+      (stmt_info, gsi, vec_stmt, slp_node, reduc_def_phi, code,
        reduc_fn, ops, vectype_in, reduc_index, masks);
 
   if (reduction_type == EXTRACT_LAST_REDUCTION)
     {
       gcc_assert (!slp_node);
-      return vectorizable_condition (stmt, gsi, vec_stmt,
+      return vectorizable_condition (stmt_info, gsi, vec_stmt,
 				     NULL, reduc_index, NULL, NULL);
     }
 
@@ -7053,7 +7057,7 @@ vectorizable_reduction (gimple *stmt, gi
       if (code == COND_EXPR)
         {
           gcc_assert (!slp_node);
-	  vectorizable_condition (stmt, gsi, vec_stmt,
+	  vectorizable_condition (stmt_info, gsi, vec_stmt,
 				  PHI_RESULT (phis[0]->stmt),
 				  reduc_index, NULL, NULL);
           /* Multiple types are not supported for condition.  */
@@ -7090,12 +7094,12 @@ vectorizable_reduction (gimple *stmt, gi
           else
 	    {
               vec_oprnds0.quick_push
-		(vect_get_vec_def_for_operand (ops[0], stmt));
+		(vect_get_vec_def_for_operand (ops[0], stmt_info));
               vec_oprnds1.quick_push
-		(vect_get_vec_def_for_operand (ops[1], stmt));
+		(vect_get_vec_def_for_operand (ops[1], stmt_info));
               if (op_type == ternary_op)
 		vec_oprnds2.quick_push 
-		  (vect_get_vec_def_for_operand (ops[2], stmt));
+		  (vect_get_vec_def_for_operand (ops[2], stmt_info));
 	    }
         }
       else
@@ -7144,7 +7148,8 @@ vectorizable_reduction (gimple *stmt, gi
 	      new_temp = make_ssa_name (vec_dest, call);
 	      gimple_call_set_lhs (call, new_temp);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt_info, call, gsi);
 	    }
 	  else
 	    {
@@ -7156,7 +7161,7 @@ vectorizable_reduction (gimple *stmt, gi
 	      new_temp = make_ssa_name (vec_dest, new_stmt);
 	      gimple_assign_set_lhs (new_stmt, new_temp);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
 
           if (slp_node)
@@ -7184,7 +7189,7 @@ vectorizable_reduction (gimple *stmt, gi
   if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
     vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
 
-  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_phi,
+  vect_create_epilog_for_reduction (vect_defs, stmt_info, reduc_def_phi,
 				    epilog_copies, reduc_fn, phis,
 				    double_reduc, slp_node, slp_node_instance,
 				    cond_reduc_val, cond_reduc_op_code,
@@ -7293,7 +7298,7 @@ vectorizable_induction (gimple *phi,
   gcc_assert (ncopies >= 1);
 
   /* FORNOW. These restrictions should be relaxed.  */
-  if (nested_in_vect_loop_p (loop, phi))
+  if (nested_in_vect_loop_p (loop, stmt_info))
     {
       imm_use_iterator imm_iter;
       use_operand_p use_p;
@@ -7443,10 +7448,10 @@ vectorizable_induction (gimple *phi,
       new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr),
 			      expr, step_expr);
       if (! CONSTANT_CLASS_P (new_name))
-	new_name = vect_init_vector (phi, new_name,
+	new_name = vect_init_vector (stmt_info, new_name,
 				     TREE_TYPE (step_expr), NULL);
       new_vec = build_vector_from_val (vectype, new_name);
-      vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
+      vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
 
       /* Now generate the IVs.  */
       unsigned group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
@@ -7513,10 +7518,10 @@ vectorizable_induction (gimple *phi,
 	  new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr),
 				  expr, step_expr);
 	  if (! CONSTANT_CLASS_P (new_name))
-	    new_name = vect_init_vector (phi, new_name,
+	    new_name = vect_init_vector (stmt_info, new_name,
 					 TREE_TYPE (step_expr), NULL);
 	  new_vec = build_vector_from_val (vectype, new_name);
-	  vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
+	  vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
 	  for (; ivn < nvects; ++ivn)
 	    {
 	      gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs]->stmt;
@@ -7549,7 +7554,7 @@ vectorizable_induction (gimple *phi,
       /* iv_loop is nested in the loop to be vectorized.  init_expr had already
 	 been created during vectorization of previous stmts.  We obtain it
 	 from the STMT_VINFO_VEC_STMT of the defining stmt.  */
-      vec_init = vect_get_vec_def_for_operand (init_expr, phi);
+      vec_init = vect_get_vec_def_for_operand (init_expr, stmt_info);
       /* If the initial value is not of proper type, convert it.  */
       if (!useless_type_conversion_p (vectype, TREE_TYPE (vec_init)))
 	{
@@ -7651,7 +7656,7 @@ vectorizable_induction (gimple *phi,
   gcc_assert (CONSTANT_CLASS_P (new_name)
 	      || TREE_CODE (new_name) == SSA_NAME);
   new_vec = build_vector_from_val (vectype, t);
-  vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
+  vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
 
 
   /* Create the following def-use cycle:
@@ -7717,7 +7722,7 @@ vectorizable_induction (gimple *phi,
       gcc_assert (CONSTANT_CLASS_P (new_name)
 		  || TREE_CODE (new_name) == SSA_NAME);
       new_vec = build_vector_from_val (vectype, t);
-      vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
+      vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
 
       vec_def = induc_def;
       prev_stmt_vinfo = induction_phi_info;
@@ -7815,7 +7820,7 @@ vectorizable_live_operation (gimple *stm
     return false;
 
   /* FORNOW.  CHECKME.  */
-  if (nested_in_vect_loop_p (loop, stmt))
+  if (nested_in_vect_loop_p (loop, stmt_info))
     return false;
 
   /* If STMT is not relevant and it is a simple assignment and its inputs are
@@ -7823,7 +7828,7 @@ vectorizable_live_operation (gimple *stm
      scalar value that it computes will be used.  */
   if (!STMT_VINFO_RELEVANT_P (stmt_info))
     {
-      gcc_assert (is_simple_and_all_uses_invariant (stmt, loop_vinfo));
+      gcc_assert (is_simple_and_all_uses_invariant (stmt_info, loop_vinfo));
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_NOTE, vect_location,
 			 "statement is simple and uses invariant.  Leaving in "
@@ -8222,11 +8227,11 @@ vect_transform_loop_stmt (loop_vec_info
     {
       dump_printf_loc (MSG_NOTE, vect_location,
 		       "------>vectorizing statement: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
     }
 
   if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info))
-    vect_loop_kill_debug_uses (loop, stmt);
+    vect_loop_kill_debug_uses (loop, stmt_info);
 
   if (!STMT_VINFO_RELEVANT_P (stmt_info)
       && !STMT_VINFO_LIVE_P (stmt_info))
@@ -8267,7 +8272,7 @@ vect_transform_loop_stmt (loop_vec_info
     dump_printf_loc (MSG_NOTE, vect_location, "transform statement.\n");
 
   bool grouped_store = false;
-  if (vect_transform_stmt (stmt, gsi, &grouped_store, NULL, NULL))
+  if (vect_transform_stmt (stmt_info, gsi, &grouped_store, NULL, NULL))
     *seen_store = stmt_info;
 }
 
@@ -8422,7 +8427,7 @@ vect_transform_loop (loop_vec_info loop_
 	    continue;
 
 	  if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info))
-	    vect_loop_kill_debug_uses (loop, phi);
+	    vect_loop_kill_debug_uses (loop, stmt_info);
 
 	  if (!STMT_VINFO_RELEVANT_P (stmt_info)
 	      && !STMT_VINFO_LIVE_P (stmt_info))
@@ -8441,7 +8446,7 @@ vect_transform_loop (loop_vec_info loop_
 	    {
 	      if (dump_enabled_p ())
 		dump_printf_loc (MSG_NOTE, vect_location, "transform phi.\n");
-	      vect_transform_stmt (phi, NULL, NULL, NULL, NULL);
+	      vect_transform_stmt (stmt_info, NULL, NULL, NULL, NULL);
 	    }
 	}
 
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:23:31.740764343 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:23:35.380732018 +0100
@@ -842,7 +842,7 @@ vect_reassociating_reduction_p (stmt_vec
   /* We don't allow changing the order of the computation in the inner-loop
      when doing outer-loop vectorization.  */
   struct loop *loop = LOOP_VINFO_LOOP (loop_info);
-  if (loop && nested_in_vect_loop_p (loop, assign))
+  if (loop && nested_in_vect_loop_p (loop, stmt_info))
     return false;
 
   if (!vect_reassociating_reduction_p (stmt_info))
@@ -1196,7 +1196,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
   auto_vec<tree> dummy_vec;
   if (!vectype
       || !vecitype
-      || !supportable_widening_operation (wide_code, last_stmt,
+      || !supportable_widening_operation (wide_code, last_stmt_info,
 					  vecitype, vectype,
 					  &dummy_code, &dummy_code,
 					  &dummy_int, &dummy_vec))
@@ -3118,11 +3118,11 @@ vect_recog_mixed_size_cond_pattern (stmt
     return NULL;
 
   if ((TREE_CODE (then_clause) != INTEGER_CST
-       && !type_conversion_p (then_clause, last_stmt, false, &orig_type0,
-                              &def_stmt0, &promotion))
+       && !type_conversion_p (then_clause, stmt_vinfo, false, &orig_type0,
+			      &def_stmt0, &promotion))
       || (TREE_CODE (else_clause) != INTEGER_CST
-          && !type_conversion_p (else_clause, last_stmt, false, &orig_type1,
-                                 &def_stmt1, &promotion)))
+	  && !type_conversion_p (else_clause, stmt_vinfo, false, &orig_type1,
+				 &def_stmt1, &promotion)))
     return NULL;
 
   if (orig_type0 && orig_type1
@@ -3709,7 +3709,7 @@ vect_recog_bool_pattern (stmt_vec_info s
 
       if (check_bool_pattern (var, vinfo, bool_stmts))
 	{
-	  rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), last_stmt);
+	  rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), stmt_vinfo);
 	  lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
 	  if (useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
 	    pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
@@ -3776,7 +3776,7 @@ vect_recog_bool_pattern (stmt_vec_info s
       if (!check_bool_pattern (var, vinfo, bool_stmts))
 	return NULL;
 
-      rhs = adjust_bool_stmts (bool_stmts, type, last_stmt);
+      rhs = adjust_bool_stmts (bool_stmts, type, stmt_vinfo);
 
       lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
       pattern_stmt 
@@ -3800,7 +3800,7 @@ vect_recog_bool_pattern (stmt_vec_info s
 	return NULL;
 
       if (check_bool_pattern (var, vinfo, bool_stmts))
-	rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), last_stmt);
+	rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), stmt_vinfo);
       else
 	{
 	  tree type = search_type_for_mask (var, vinfo);
@@ -4234,13 +4234,12 @@ vect_recog_gather_scatter_pattern (stmt_
 
   /* Get the boolean that controls whether the load or store happens.
      This is null if the operation is unconditional.  */
-  gimple *stmt = stmt_info->stmt;
-  tree mask = vect_get_load_store_mask (stmt);
+  tree mask = vect_get_load_store_mask (stmt_info);
 
   /* Make sure that the target supports an appropriate internal
      function for the gather/scatter operation.  */
   gather_scatter_info gs_info;
-  if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info)
+  if (!vect_check_gather_scatter (stmt_info, loop_vinfo, &gs_info)
       || gs_info.decl)
     return NULL;
 
@@ -4273,7 +4272,7 @@ vect_recog_gather_scatter_pattern (stmt_
     }
   else
     {
-      tree rhs = vect_get_store_rhs (stmt);
+      tree rhs = vect_get_store_rhs (stmt_info);
       if (mask != NULL)
 	pattern_stmt = gimple_build_call_internal (IFN_MASK_SCATTER_STORE, 5,
 						   base, offset, scale, rhs,
@@ -4295,7 +4294,7 @@ vect_recog_gather_scatter_pattern (stmt_
 
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   *type_out = vectype;
-  vect_pattern_detected ("gather/scatter pattern", stmt);
+  vect_pattern_detected ("gather/scatter pattern", stmt_info->stmt);
 
   return pattern_stmt;
 }
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:31.740764343 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:35.380732018 +0100
@@ -2096,8 +2096,8 @@ vect_analyze_slp_instance (vec_info *vin
                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				   "Build SLP failed: unsupported load "
 				   "permutation ");
-		      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
-					TDF_SLIM, stmt, 0);
+		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
+				    TDF_SLIM, stmt_info->stmt, 0);
                 }
 	      vect_free_slp_instance (new_instance, false);
               return false;
@@ -2172,8 +2172,9 @@ vect_analyze_slp_instance (vec_info *vin
 	  gcc_assert ((const_nunits & (const_nunits - 1)) == 0);
 	  unsigned group1_size = i & ~(const_nunits - 1);
 
-	  gimple *rest = vect_split_slp_store_group (stmt, group1_size);
-	  bool res = vect_analyze_slp_instance (vinfo, stmt, max_tree_size);
+	  gimple *rest = vect_split_slp_store_group (stmt_info, group1_size);
+	  bool res = vect_analyze_slp_instance (vinfo, stmt_info,
+						max_tree_size);
 	  /* If the first non-match was in the middle of a vector,
 	     skip the rest of that vector.  */
 	  if (group1_size < i)
@@ -2513,7 +2514,6 @@ vect_slp_analyze_node_operations_1 (vec_
 				    stmt_vector_for_cost *cost_vec)
 {
   stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  gimple *stmt = stmt_info->stmt;
   gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
 
   /* For BB vectorization vector types are assigned here.
@@ -2567,7 +2567,7 @@ vect_slp_analyze_node_operations_1 (vec_
     }
 
   bool dummy;
-  return vect_analyze_stmt (stmt, &dummy, node, node_instance, cost_vec);
+  return vect_analyze_stmt (stmt_info, &dummy, node, node_instance, cost_vec);
 }
 
 /* Analyze statements contained in SLP tree NODE after recursively analyzing
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:31.744764307 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:35.384731983 +0100
@@ -205,7 +205,7 @@ vect_mark_relevant (vec<gimple *> *workl
     {
       dump_printf_loc (MSG_NOTE, vect_location,
 		       "mark relevant %d, live %d: ", relevant, live_p);
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
     }
 
   /* If this stmt is an original stmt in a pattern, we might need to mark its
@@ -244,7 +244,7 @@ vect_mark_relevant (vec<gimple *> *workl
       return;
     }
 
-  worklist->safe_push (stmt);
+  worklist->safe_push (stmt_info);
 }
 
 
@@ -389,10 +389,10 @@ exist_non_indexing_operands_for_use_p (t
      Therefore, all we need to check is if STMT falls into the
      first case, and whether var corresponds to USE.  */
 
-  gassign *assign = dyn_cast <gassign *> (stmt);
+  gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
   if (!assign || !gimple_assign_copy_p (assign))
     {
-      gcall *call = dyn_cast <gcall *> (stmt);
+      gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
       if (call && gimple_call_internal_p (call))
 	{
 	  internal_fn ifn = gimple_call_internal_fn (call);
@@ -463,7 +463,7 @@ process_use (gimple *stmt, tree use, loo
 
   /* case 1: we are only interested in uses that need to be vectorized.  Uses
      that are used for address computation are not considered relevant.  */
-  if (!force && !exist_non_indexing_operands_for_use_p (use, stmt))
+  if (!force && !exist_non_indexing_operands_for_use_p (use, stmt_vinfo))
      return true;
 
   if (!vect_is_simple_use (use, loop_vinfo, &dt, &dstmt_vinfo))
@@ -484,8 +484,8 @@ process_use (gimple *stmt, tree use, loo
      only way that STMT, which is a reduction-phi, was put in the worklist,
      as there should be no other uses for DSTMT_VINFO in the loop.  So we just
      check that everything is as expected, and we are done.  */
-  bb = gimple_bb (stmt);
-  if (gimple_code (stmt) == GIMPLE_PHI
+  bb = gimple_bb (stmt_vinfo->stmt);
+  if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI
       && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
       && gimple_code (dstmt_vinfo->stmt) != GIMPLE_PHI
       && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def
@@ -576,10 +576,11 @@ process_use (gimple *stmt, tree use, loo
      inductions.  Otherwise we'll needlessly vectorize the IV increment
      and cause hybrid SLP for SLP inductions.  Unless the PHI is live
      of course.  */
-  else if (gimple_code (stmt) == GIMPLE_PHI
+  else if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI
 	   && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_induction_def
 	   && ! STMT_VINFO_LIVE_P (stmt_vinfo)
-	   && (PHI_ARG_DEF_FROM_EDGE (stmt, loop_latch_edge (bb->loop_father))
+	   && (PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt,
+				      loop_latch_edge (bb->loop_father))
 	       == use))
     {
       if (dump_enabled_p ())
@@ -740,7 +741,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
           /* Pattern statements are not inserted into the code, so
              FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
              have to scan the RHS or function arguments instead.  */
-	  if (gassign *assign = dyn_cast <gassign *> (stmt))
+	  if (gassign *assign = dyn_cast <gassign *> (stmt_vinfo->stmt))
 	    {
 	      enum tree_code rhs_code = gimple_assign_rhs_code (assign);
 	      tree op = gimple_assign_rhs1 (assign);
@@ -748,10 +749,10 @@ vect_mark_stmts_to_be_vectorized (loop_v
 	      i = 1;
 	      if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op))
 		{
-		  if (!process_use (stmt, TREE_OPERAND (op, 0), loop_vinfo,
-				    relevant, &worklist, false)
-		      || !process_use (stmt, TREE_OPERAND (op, 1), loop_vinfo,
-				       relevant, &worklist, false))
+		  if (!process_use (stmt_vinfo, TREE_OPERAND (op, 0),
+				    loop_vinfo, relevant, &worklist, false)
+		      || !process_use (stmt_vinfo, TREE_OPERAND (op, 1),
+				       loop_vinfo, relevant, &worklist, false))
 		    return false;
 		  i = 2;
 		}
@@ -759,27 +760,27 @@ vect_mark_stmts_to_be_vectorized (loop_v
 		{
 		  op = gimple_op (assign, i);
                   if (TREE_CODE (op) == SSA_NAME
-		      && !process_use (stmt, op, loop_vinfo, relevant,
+		      && !process_use (stmt_vinfo, op, loop_vinfo, relevant,
 				       &worklist, false))
                     return false;
                  }
             }
-	  else if (gcall *call = dyn_cast <gcall *> (stmt))
+	  else if (gcall *call = dyn_cast <gcall *> (stmt_vinfo->stmt))
 	    {
 	      for (i = 0; i < gimple_call_num_args (call); i++)
 		{
 		  tree arg = gimple_call_arg (call, i);
-		  if (!process_use (stmt, arg, loop_vinfo, relevant,
+		  if (!process_use (stmt_vinfo, arg, loop_vinfo, relevant,
 				    &worklist, false))
                     return false;
 		}
 	    }
         }
       else
-        FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE)
+	FOR_EACH_PHI_OR_STMT_USE (use_p, stmt_vinfo->stmt, iter, SSA_OP_USE)
           {
             tree op = USE_FROM_PTR (use_p);
-	    if (!process_use (stmt, op, loop_vinfo, relevant,
+	    if (!process_use (stmt_vinfo, op, loop_vinfo, relevant,
 			      &worklist, false))
               return false;
           }
@@ -787,9 +788,9 @@ vect_mark_stmts_to_be_vectorized (loop_v
       if (STMT_VINFO_GATHER_SCATTER_P (stmt_vinfo))
 	{
 	  gather_scatter_info gs_info;
-	  if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info))
+	  if (!vect_check_gather_scatter (stmt_vinfo, loop_vinfo, &gs_info))
 	    gcc_unreachable ();
-	  if (!process_use (stmt, gs_info.offset, loop_vinfo, relevant,
+	  if (!process_use (stmt_vinfo, gs_info.offset, loop_vinfo, relevant,
 			    &worklist, true))
 	    return false;
 	}
@@ -1362,8 +1363,8 @@ vect_init_vector_1 (gimple *stmt, gimple
 	  basic_block new_bb;
 	  edge pe;
 
-          if (nested_in_vect_loop_p (loop, stmt))
-            loop = loop->inner;
+	  if (nested_in_vect_loop_p (loop, stmt_vinfo))
+	    loop = loop->inner;
 
 	  pe = loop_preheader_edge (loop);
           new_bb = gsi_insert_on_edge_immediate (pe, new_stmt);
@@ -1573,7 +1574,7 @@ vect_get_vec_def_for_operand (tree op, g
 	vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
 
       gcc_assert (vector_type);
-      return vect_init_vector (stmt, op, vector_type, NULL);
+      return vect_init_vector (stmt_vinfo, op, vector_type, NULL);
     }
   else
     return vect_get_vec_def_for_operand_1 (def_stmt_info, dt);
@@ -1740,12 +1741,12 @@ vect_finish_stmt_generation_1 (gimple *s
       dump_gimple_stmt (MSG_NOTE, TDF_SLIM, vec_stmt, 0);
     }
 
-  gimple_set_location (vec_stmt, gimple_location (stmt));
+  gimple_set_location (vec_stmt, gimple_location (stmt_info->stmt));
 
   /* While EH edges will generally prevent vectorization, stmt might
      e.g. be in a must-not-throw region.  Ensure newly created stmts
      that could throw are part of the same region.  */
-  int lp_nr = lookup_stmt_eh_lp (stmt);
+  int lp_nr = lookup_stmt_eh_lp (stmt_info->stmt);
   if (lp_nr != 0 && stmt_could_throw_p (vec_stmt))
     add_stmt_to_eh_lp (vec_stmt, lp_nr);
 
@@ -2269,7 +2270,7 @@ get_group_load_store_type (gimple *stmt,
 
       if (!STMT_VINFO_STRIDED_P (stmt_info)
 	  && (can_overrun_p || !would_overrun_p)
-	  && compare_step_with_zero (stmt) > 0)
+	  && compare_step_with_zero (stmt_info) > 0)
 	{
 	  /* First cope with the degenerate case of a single-element
 	     vector.  */
@@ -2309,7 +2310,7 @@ get_group_load_store_type (gimple *stmt,
       if (*memory_access_type == VMAT_ELEMENTWISE
 	  && single_element_p
 	  && loop_vinfo
-	  && vect_use_strided_gather_scatters_p (stmt, loop_vinfo,
+	  && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
 						 masked_p, gs_info))
 	*memory_access_type = VMAT_GATHER_SCATTER;
     }
@@ -2421,7 +2422,7 @@ get_load_store_type (gimple *stmt, tree
   if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
     {
       *memory_access_type = VMAT_GATHER_SCATTER;
-      if (!vect_check_gather_scatter (stmt, loop_vinfo, gs_info))
+      if (!vect_check_gather_scatter (stmt_info, loop_vinfo, gs_info))
 	gcc_unreachable ();
       else if (!vect_is_simple_use (gs_info->offset, vinfo,
 				    &gs_info->offset_dt,
@@ -2436,15 +2437,15 @@ get_load_store_type (gimple *stmt, tree
     }
   else if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
-      if (!get_group_load_store_type (stmt, vectype, slp, masked_p, vls_type,
-				      memory_access_type, gs_info))
+      if (!get_group_load_store_type (stmt_info, vectype, slp, masked_p,
+				      vls_type, memory_access_type, gs_info))
 	return false;
     }
   else if (STMT_VINFO_STRIDED_P (stmt_info))
     {
       gcc_assert (!slp);
       if (loop_vinfo
-	  && vect_use_strided_gather_scatters_p (stmt, loop_vinfo,
+	  && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
 						 masked_p, gs_info))
 	*memory_access_type = VMAT_GATHER_SCATTER;
       else
@@ -2452,10 +2453,10 @@ get_load_store_type (gimple *stmt, tree
     }
   else
     {
-      int cmp = compare_step_with_zero (stmt);
+      int cmp = compare_step_with_zero (stmt_info);
       if (cmp < 0)
 	*memory_access_type = get_negative_load_store_type
-	  (stmt, vectype, vls_type, ncopies);
+	  (stmt_info, vectype, vls_type, ncopies);
       else if (cmp == 0)
 	{
 	  gcc_assert (vls_type == VLS_LOAD);
@@ -2742,8 +2743,8 @@ vect_build_gather_load_calls (gimple *st
   else
     gcc_unreachable ();
 
-  tree vec_dest = vect_create_destination_var (gimple_get_lhs (stmt),
-					       vectype);
+  tree scalar_dest = gimple_get_lhs (stmt_info->stmt);
+  tree vec_dest = vect_create_destination_var (scalar_dest, vectype);
 
   tree ptr = fold_convert (ptrtype, gs_info->base);
   if (!is_gimple_min_invariant (ptr))
@@ -2765,8 +2766,8 @@ vect_build_gather_load_calls (gimple *st
 
   if (!mask)
     {
-      src_op = vect_build_zero_merge_argument (stmt, rettype);
-      mask_op = vect_build_all_ones_mask (stmt, masktype);
+      src_op = vect_build_zero_merge_argument (stmt_info, rettype);
+      mask_op = vect_build_all_ones_mask (stmt_info, masktype);
     }
 
   for (int j = 0; j < ncopies; ++j)
@@ -2774,10 +2775,10 @@ vect_build_gather_load_calls (gimple *st
       tree op, var;
       if (modifier == WIDEN && (j & 1))
 	op = permute_vec_elements (vec_oprnd0, vec_oprnd0,
-				   perm_mask, stmt, gsi);
+				   perm_mask, stmt_info, gsi);
       else if (j == 0)
 	op = vec_oprnd0
-	  = vect_get_vec_def_for_operand (gs_info->offset, stmt);
+	  = vect_get_vec_def_for_operand (gs_info->offset, stmt_info);
       else
 	op = vec_oprnd0
 	  = vect_get_vec_def_for_stmt_copy (gs_info->offset_dt, vec_oprnd0);
@@ -2789,7 +2790,7 @@ vect_build_gather_load_calls (gimple *st
 	  var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	  op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
 	  gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
-	  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	  op = var;
 	}
 
@@ -2797,11 +2798,11 @@ vect_build_gather_load_calls (gimple *st
 	{
 	  if (mask_perm_mask && (j & 1))
 	    mask_op = permute_vec_elements (mask_op, mask_op,
-					    mask_perm_mask, stmt, gsi);
+					    mask_perm_mask, stmt_info, gsi);
 	  else
 	    {
 	      if (j == 0)
-		vec_mask = vect_get_vec_def_for_operand (mask, stmt);
+		vec_mask = vect_get_vec_def_for_operand (mask, stmt_info);
 	      else
 		vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
 
@@ -2815,7 +2816,7 @@ vect_build_gather_load_calls (gimple *st
 		  mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);
 		  gassign *new_stmt
 		    = gimple_build_assign (var, VIEW_CONVERT_EXPR, mask_op);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  mask_op = var;
 		}
 	    }
@@ -2832,17 +2833,19 @@ vect_build_gather_load_calls (gimple *st
 				TYPE_VECTOR_SUBPARTS (rettype)));
 	  op = vect_get_new_ssa_name (rettype, vect_simple_var);
 	  gimple_call_set_lhs (new_call, op);
-	  vect_finish_stmt_generation (stmt, new_call, gsi);
+	  vect_finish_stmt_generation (stmt_info, new_call, gsi);
 	  var = make_ssa_name (vec_dest);
 	  op = build1 (VIEW_CONVERT_EXPR, vectype, op);
 	  gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info
+	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	}
       else
 	{
 	  var = make_ssa_name (vec_dest, new_call);
 	  gimple_call_set_lhs (new_call, var);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, new_call, gsi);
+	  new_stmt_info
+	    = vect_finish_stmt_generation (stmt_info, new_call, gsi);
 	}
 
       if (modifier == NARROW)
@@ -2852,7 +2855,8 @@ vect_build_gather_load_calls (gimple *st
 	      prev_res = var;
 	      continue;
 	    }
-	  var = permute_vec_elements (prev_res, var, perm_mask, stmt, gsi);
+	  var = permute_vec_elements (prev_res, var, perm_mask,
+				      stmt_info, gsi);
 	  new_stmt_info = loop_vinfo->lookup_def (var);
 	}
 
@@ -3027,7 +3031,7 @@ vectorizable_bswap (gimple *stmt, gimple
     {
       /* Handle uses.  */
       if (j == 0)
-        vect_get_vec_defs (op, NULL, stmt, &vec_oprnds, NULL, slp_node);
+	vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
       else
         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
 
@@ -3040,15 +3044,16 @@ vectorizable_bswap (gimple *stmt, gimple
 	 tree tem = make_ssa_name (char_vectype);
 	 new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
 						      char_vectype, vop));
-	 vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	 tree tem2 = make_ssa_name (char_vectype);
 	 new_stmt = gimple_build_assign (tem2, VEC_PERM_EXPR,
 					 tem, tem, bswap_vconst);
-	 vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	 tem = make_ssa_name (vectype);
 	 new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
 						      vectype, tem2));
-	 new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 new_stmt_info
+	   = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
          if (slp_node)
 	   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
        }
@@ -3137,8 +3142,8 @@ vectorizable_call (gimple *gs, gimple_st
       && ! vec_stmt)
     return false;
 
-  /* Is GS a vectorizable call?   */
-  stmt = dyn_cast <gcall *> (gs);
+  /* Is STMT_INFO a vectorizable call?   */
+  stmt = dyn_cast <gcall *> (stmt_info->stmt);
   if (!stmt)
     return false;
 
@@ -3307,7 +3312,7 @@ vectorizable_call (gimple *gs, gimple_st
 	       && (gimple_call_builtin_p (stmt, BUILT_IN_BSWAP16)
 		   || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP32)
 		   || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP64)))
-	return vectorizable_bswap (stmt, gsi, vec_stmt, slp_node,
+	return vectorizable_bswap (stmt_info, gsi, vec_stmt, slp_node,
 				   vectype_in, dt, cost_vec);
       else
 	{
@@ -3400,7 +3405,7 @@ vectorizable_call (gimple *gs, gimple_st
 		      gimple_call_set_lhs (call, half_res);
 		      gimple_call_set_nothrow (call, true);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, call, gsi);
+			= vect_finish_stmt_generation (stmt_info, call, gsi);
 		      if ((i & 1) == 0)
 			{
 			  prev_res = half_res;
@@ -3411,7 +3416,8 @@ vectorizable_call (gimple *gs, gimple_st
 			= gimple_build_assign (new_temp, convert_code,
 					       prev_res, half_res);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, new_stmt,
+						       gsi);
 		    }
 		  else
 		    {
@@ -3435,7 +3441,7 @@ vectorizable_call (gimple *gs, gimple_st
 		      gimple_call_set_lhs (call, new_temp);
 		      gimple_call_set_nothrow (call, true);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, call, gsi);
+			= vect_finish_stmt_generation (stmt_info, call, gsi);
 		    }
 		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 		}
@@ -3453,7 +3459,7 @@ vectorizable_call (gimple *gs, gimple_st
 	      op = gimple_call_arg (stmt, i);
 	      if (j == 0)
 		vec_oprnd0
-		  = vect_get_vec_def_for_operand (op, stmt);
+		  = vect_get_vec_def_for_operand (op, stmt_info);
 	      else
 		vec_oprnd0
 		  = vect_get_vec_def_for_stmt_copy (dt[i], orig_vargs[i]);
@@ -3476,11 +3482,11 @@ vectorizable_call (gimple *gs, gimple_st
 	      tree new_var
 		= vect_get_new_ssa_name (vectype_out, vect_simple_var, "cst_");
 	      gimple *init_stmt = gimple_build_assign (new_var, cst);
-	      vect_init_vector_1 (stmt, init_stmt, NULL);
+	      vect_init_vector_1 (stmt_info, init_stmt, NULL);
 	      new_temp = make_ssa_name (vec_dest);
 	      gimple *new_stmt = gimple_build_assign (new_temp, new_var);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
 	  else if (modifier == NARROW)
 	    {
@@ -3491,7 +3497,8 @@ vectorizable_call (gimple *gs, gimple_st
 	      gcall *call = gimple_build_call_internal_vec (ifn, vargs);
 	      gimple_call_set_lhs (call, half_res);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt_info, call, gsi);
 	      if ((j & 1) == 0)
 		{
 		  prev_res = half_res;
@@ -3501,7 +3508,7 @@ vectorizable_call (gimple *gs, gimple_st
 	      gassign *new_stmt = gimple_build_assign (new_temp, convert_code,
 						       prev_res, half_res);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
 	  else
 	    {
@@ -3513,7 +3520,8 @@ vectorizable_call (gimple *gs, gimple_st
 	      new_temp = make_ssa_name (vec_dest, call);
 	      gimple_call_set_lhs (call, new_temp);
 	      gimple_call_set_nothrow (call, true);
-	      new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
+	      new_stmt_info
+		= vect_finish_stmt_generation (stmt_info, call, gsi);
 	    }
 
 	  if (j == (modifier == NARROW ? 1 : 0))
@@ -3566,7 +3574,7 @@ vectorizable_call (gimple *gs, gimple_st
 		  gimple_call_set_lhs (call, new_temp);
 		  gimple_call_set_nothrow (call, true);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, call, gsi);
+		    = vect_finish_stmt_generation (stmt_info, call, gsi);
 		  SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
 		}
 
@@ -3584,7 +3592,7 @@ vectorizable_call (gimple *gs, gimple_st
 	      if (j == 0)
 		{
 		  vec_oprnd0
-		    = vect_get_vec_def_for_operand (op, stmt);
+		    = vect_get_vec_def_for_operand (op, stmt_info);
 		  vec_oprnd1
 		    = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd0);
 		}
@@ -3605,7 +3613,8 @@ vectorizable_call (gimple *gs, gimple_st
 	  gcall *new_stmt = gimple_build_call_vec (fndecl, vargs);
 	  new_temp = make_ssa_name (vec_dest, new_stmt);
 	  gimple_call_set_lhs (new_stmt, new_temp);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info
+	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 	  if (j == 0)
 	    STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
@@ -3793,7 +3802,7 @@ vectorizable_simd_clone_call (gimple *st
 
   vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-  if (loop_vinfo && nested_in_vect_loop_p (loop, stmt))
+  if (loop_vinfo && nested_in_vect_loop_p (loop, stmt_info))
     return false;
 
   /* FORNOW */
@@ -4098,7 +4107,7 @@ vectorizable_simd_clone_call (gimple *st
 		      gcc_assert ((k & (k - 1)) == 0);
 		      if (m == 0)
 			vec_oprnd0
-			  = vect_get_vec_def_for_operand (op, stmt);
+			  = vect_get_vec_def_for_operand (op, stmt_info);
 		      else
 			{
 			  vec_oprnd0 = arginfo[i].op;
@@ -4115,7 +4124,7 @@ vectorizable_simd_clone_call (gimple *st
 		      gassign *new_stmt
 			= gimple_build_assign (make_ssa_name (atype),
 					       vec_oprnd0);
-		      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		      vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		      vargs.safe_push (gimple_assign_lhs (new_stmt));
 		    }
 		  else
@@ -4132,7 +4141,7 @@ vectorizable_simd_clone_call (gimple *st
 			{
 			  if (m == 0 && l == 0)
 			    vec_oprnd0
-			      = vect_get_vec_def_for_operand (op, stmt);
+			      = vect_get_vec_def_for_operand (op, stmt_info);
 			  else
 			    vec_oprnd0
 			      = vect_get_vec_def_for_stmt_copy (arginfo[i].dt,
@@ -4151,7 +4160,8 @@ vectorizable_simd_clone_call (gimple *st
 			  gassign *new_stmt
 			    = gimple_build_assign (make_ssa_name (atype),
 						   vec_oprnd0);
-			  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+			  vect_finish_stmt_generation (stmt_info, new_stmt,
+						       gsi);
 			  vargs.safe_push (gimple_assign_lhs (new_stmt));
 			}
 		    }
@@ -4220,7 +4230,7 @@ vectorizable_simd_clone_call (gimple *st
 		  gassign *new_stmt
 		    = gimple_build_assign (new_temp, code,
 					   arginfo[i].op, tcst);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  vargs.safe_push (new_temp);
 		}
 	      break;
@@ -4249,7 +4259,7 @@ vectorizable_simd_clone_call (gimple *st
 	  gimple_call_set_lhs (new_call, new_temp);
 	}
       stmt_vec_info new_stmt_info
-	= vect_finish_stmt_generation (stmt, new_call, gsi);
+	= vect_finish_stmt_generation (stmt_info, new_call, gsi);
 
       if (vec_dest)
 	{
@@ -4275,7 +4285,7 @@ vectorizable_simd_clone_call (gimple *st
 		  gimple *new_stmt
 		    = gimple_build_assign (make_ssa_name (vectype), t);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 		  if (j == 0 && l == 0)
 		    STMT_VINFO_VEC_STMT (stmt_info)
@@ -4287,7 +4297,7 @@ vectorizable_simd_clone_call (gimple *st
 		}
 
 	      if (ratype)
-		vect_clobber_variable (stmt, gsi, new_temp);
+		vect_clobber_variable (stmt_info, gsi, new_temp);
 	      continue;
 	    }
 	  else if (simd_clone_subparts (vectype) > nunits)
@@ -4307,11 +4317,12 @@ vectorizable_simd_clone_call (gimple *st
 		      gimple *new_stmt
 			= gimple_build_assign (make_ssa_name (rtype), tem);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, new_stmt,
+						       gsi);
 		      CONSTRUCTOR_APPEND_ELT (ret_ctor_elts, NULL_TREE,
 					      gimple_assign_lhs (new_stmt));
 		    }
-		  vect_clobber_variable (stmt, gsi, new_temp);
+		  vect_clobber_variable (stmt_info, gsi, new_temp);
 		}
 	      else
 		CONSTRUCTOR_APPEND_ELT (ret_ctor_elts, NULL_TREE, new_temp);
@@ -4321,7 +4332,7 @@ vectorizable_simd_clone_call (gimple *st
 	      gimple *new_stmt
 		= gimple_build_assign (make_ssa_name (vec_dest), vec_oprnd0);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 	      if ((unsigned) j == k - 1)
 		STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
@@ -4339,8 +4350,8 @@ vectorizable_simd_clone_call (gimple *st
 	      gimple *new_stmt
 		= gimple_build_assign (make_ssa_name (vec_dest), t);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
-	      vect_clobber_variable (stmt, gsi, new_temp);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
+	      vect_clobber_variable (stmt_info, gsi, new_temp);
 	    }
 	}
 
@@ -4493,7 +4504,7 @@ vect_create_vectorized_demotion_stmts (v
       new_tmp = make_ssa_name (vec_dest, new_stmt);
       gimple_assign_set_lhs (new_stmt, new_tmp);
       stmt_vec_info new_stmt_info
-	= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
       if (multi_step_cvt)
 	/* Store the resulting vector for next recursive call.  */
@@ -4527,8 +4538,8 @@ vect_create_vectorized_demotion_stmts (v
 	 previous level.  */
       vec_oprnds->truncate ((i+1)/2);
       vect_create_vectorized_demotion_stmts (vec_oprnds, multi_step_cvt - 1,
-					     stmt, vec_dsts, gsi, slp_node,
-					     VEC_PACK_TRUNC_EXPR,
+					     stmt_info, vec_dsts, gsi,
+					     slp_node, VEC_PACK_TRUNC_EXPR,
 					     prev_stmt_info);
     }
 
@@ -4793,9 +4804,9 @@ vectorizable_conversion (gimple *stmt, g
       return false;
 
     case WIDEN:
-      if (supportable_widening_operation (code, stmt, vectype_out, vectype_in,
-					  &code1, &code2, &multi_step_cvt,
-					  &interm_types))
+      if (supportable_widening_operation (code, stmt_info, vectype_out,
+					  vectype_in, &code1, &code2,
+					  &multi_step_cvt, &interm_types))
 	{
 	  /* Binary widening operation can only be supported directly by the
 	     architecture.  */
@@ -4826,15 +4837,16 @@ vectorizable_conversion (gimple *stmt, g
 						  cvt_type, &decl1, &codecvt1))
 		goto unsupported;
 	    }
-	  else if (!supportable_widening_operation (code, stmt, vectype_out,
-						    cvt_type, &codecvt1,
-						    &codecvt2, &multi_step_cvt,
+	  else if (!supportable_widening_operation (code, stmt_info,
+						    vectype_out, cvt_type,
+						    &codecvt1, &codecvt2,
+						    &multi_step_cvt,
 						    &interm_types))
 	    continue;
 	  else
 	    gcc_assert (multi_step_cvt == 0);
 
-	  if (supportable_widening_operation (NOP_EXPR, stmt, cvt_type,
+	  if (supportable_widening_operation (NOP_EXPR, stmt_info, cvt_type,
 					      vectype_in, &code1, &code2,
 					      &multi_step_cvt, &interm_types))
 	    {
@@ -4973,7 +4985,8 @@ vectorizable_conversion (gimple *stmt, g
       for (j = 0; j < ncopies; j++)
 	{
 	  if (j == 0)
-	    vect_get_vec_defs (op0, NULL, stmt, &vec_oprnds0, NULL, slp_node);
+	    vect_get_vec_defs (op0, NULL, stmt_info, &vec_oprnds0,
+			       NULL, slp_node);
 	  else
 	    vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, NULL);
 
@@ -4987,7 +5000,7 @@ vectorizable_conversion (gimple *stmt, g
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_call_set_lhs (new_stmt, new_temp);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 	      else
 		{
@@ -4997,7 +5010,7 @@ vectorizable_conversion (gimple *stmt, g
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_assign_set_lhs (new_stmt, new_temp);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 
 	      if (slp_node)
@@ -5038,23 +5051,24 @@ vectorizable_conversion (gimple *stmt, g
 		      for (k = 0; k < slp_node->vec_stmts_size - 1; k++)
 			vec_oprnds1.quick_push (vec_oprnd1);
 
-		      vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL,
-					 slp_node);
+		      vect_get_vec_defs (op0, NULL_TREE, stmt_info,
+					 &vec_oprnds0, NULL, slp_node);
 		    }
 		  else
-		    vect_get_vec_defs (op0, op1, stmt, &vec_oprnds0,
+		    vect_get_vec_defs (op0, op1, stmt_info, &vec_oprnds0,
 				       &vec_oprnds1, slp_node);
 		}
 	      else
 		{
-		  vec_oprnd0 = vect_get_vec_def_for_operand (op0, stmt);
+		  vec_oprnd0 = vect_get_vec_def_for_operand (op0, stmt_info);
 		  vec_oprnds0.quick_push (vec_oprnd0);
 		  if (op_type == binary_op)
 		    {
 		      if (code == WIDEN_LSHIFT_EXPR)
 			vec_oprnd1 = op1;
 		      else
-			vec_oprnd1 = vect_get_vec_def_for_operand (op1, stmt);
+			vec_oprnd1
+			  = vect_get_vec_def_for_operand (op1, stmt_info);
 		      vec_oprnds1.quick_push (vec_oprnd1);
 		    }
 		}
@@ -5087,8 +5101,8 @@ vectorizable_conversion (gimple *stmt, g
 		  c2 = codecvt2;
 		}
 	      vect_create_vectorized_promotion_stmts (&vec_oprnds0,
-						      &vec_oprnds1,
-						      stmt, this_dest, gsi,
+						      &vec_oprnds1, stmt_info,
+						      this_dest, gsi,
 						      c1, c2, decl1, decl2,
 						      op_type);
 	    }
@@ -5104,7 +5118,8 @@ vectorizable_conversion (gimple *stmt, g
 		      new_temp = make_ssa_name (vec_dest, new_stmt);
 		      gimple_call_set_lhs (new_stmt, new_temp);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, new_stmt,
+						       gsi);
 		    }
 		  else
 		    {
@@ -5113,7 +5128,8 @@ vectorizable_conversion (gimple *stmt, g
 		      gassign *new_stmt
 			= gimple_build_assign (new_temp, codecvt1, vop0);
 		      new_stmt_info
-			= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, new_stmt,
+						       gsi);
 		    }
 		}
 	      else
@@ -5144,12 +5160,13 @@ vectorizable_conversion (gimple *stmt, g
 	{
 	  /* Handle uses.  */
 	  if (slp_node)
-	    vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL,
+	    vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL,
 			       slp_node);
 	  else
 	    {
 	      vec_oprnds0.truncate (0);
-	      vect_get_loop_based_defs (&last_oprnd, stmt, dt[0], &vec_oprnds0,
+	      vect_get_loop_based_defs (&last_oprnd, stmt_info, dt[0],
+					&vec_oprnds0,
 					vect_pow2 (multi_step_cvt) - 1);
 	    }
 
@@ -5162,7 +5179,7 @@ vectorizable_conversion (gimple *stmt, g
 		    gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
 		    new_temp = make_ssa_name (vec_dest, new_stmt);
 		    gimple_call_set_lhs (new_stmt, new_temp);
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  }
 		else
 		  {
@@ -5170,14 +5187,14 @@ vectorizable_conversion (gimple *stmt, g
 		    new_temp = make_ssa_name (vec_dest);
 		    gassign *new_stmt
 		      = gimple_build_assign (new_temp, codecvt1, vop0);
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  }
 
 		vec_oprnds0[i] = new_temp;
 	      }
 
 	  vect_create_vectorized_demotion_stmts (&vec_oprnds0, multi_step_cvt,
-						 stmt, vec_dsts, gsi,
+						 stmt_info, vec_dsts, gsi,
 						 slp_node, code1,
 						 &prev_stmt_info);
 	}
@@ -5324,7 +5341,7 @@ vectorizable_assignment (gimple *stmt, g
     {
       /* Handle uses.  */
       if (j == 0)
-        vect_get_vec_defs (op, NULL, stmt, &vec_oprnds, NULL, slp_node);
+	vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
       else
         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
 
@@ -5338,7 +5355,8 @@ vectorizable_assignment (gimple *stmt, g
 	 gassign *new_stmt = gimple_build_assign (vec_dest, vop);
          new_temp = make_ssa_name (vec_dest, new_stmt);
          gimple_assign_set_lhs (new_stmt, new_temp);
-	 new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	 new_stmt_info
+	   = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
          if (slp_node)
 	   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
        }
@@ -5623,7 +5641,7 @@ vectorizable_shift (gimple *stmt, gimple
 		  if (vec_stmt && !slp_node)
 		    {
 		      op1 = fold_convert (TREE_TYPE (vectype), op1);
-		      op1 = vect_init_vector (stmt, op1,
+		      op1 = vect_init_vector (stmt_info, op1,
 					      TREE_TYPE (vectype), NULL);
 		    }
 		}
@@ -5722,11 +5740,11 @@ vectorizable_shift (gimple *stmt, gimple
              (a special case for certain kind of vector shifts); otherwise,
              operand 1 should be of a vector type (the usual case).  */
           if (vec_oprnd1)
-            vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL,
-                               slp_node);
+	    vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL,
+			       slp_node);
           else
-            vect_get_vec_defs (op0, op1, stmt, &vec_oprnds0, &vec_oprnds1,
-                               slp_node);
+	    vect_get_vec_defs (op0, op1, stmt_info, &vec_oprnds0, &vec_oprnds1,
+			       slp_node);
         }
       else
         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
@@ -5739,7 +5757,8 @@ vectorizable_shift (gimple *stmt, gimple
 	  gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
           new_temp = make_ssa_name (vec_dest, new_stmt);
           gimple_assign_set_lhs (new_stmt, new_temp);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info
+	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
           if (slp_node)
 	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
         }
@@ -6076,7 +6095,7 @@ vectorizable_operation (gimple *stmt, gi
       if (j == 0)
 	{
 	  if (op_type == binary_op)
-	    vect_get_vec_defs (op0, op1, stmt, &vec_oprnds0, &vec_oprnds1,
+	    vect_get_vec_defs (op0, op1, stmt_info, &vec_oprnds0, &vec_oprnds1,
 			       slp_node);
 	  else if (op_type == ternary_op)
 	    {
@@ -6094,14 +6113,14 @@ vectorizable_operation (gimple *stmt, gi
 		}
 	      else
 		{
-		  vect_get_vec_defs (op0, op1, stmt, &vec_oprnds0, &vec_oprnds1,
-				     NULL);
-		  vect_get_vec_defs (op2, NULL_TREE, stmt, &vec_oprnds2, NULL,
-				     NULL);
+		  vect_get_vec_defs (op0, op1, stmt_info, &vec_oprnds0,
+				     &vec_oprnds1, NULL);
+		  vect_get_vec_defs (op2, NULL_TREE, stmt_info, &vec_oprnds2,
+				     NULL, NULL);
 		}
 	    }
 	  else
-	    vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL,
+	    vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL,
 			       slp_node);
 	}
       else
@@ -6127,7 +6146,8 @@ vectorizable_operation (gimple *stmt, gi
 						   vop0, vop1, vop2);
 	  new_temp = make_ssa_name (vec_dest, new_stmt);
 	  gimple_assign_set_lhs (new_stmt, new_temp);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	  new_stmt_info
+	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	  if (vec_cvt_dest)
 	    {
 	      new_temp = build1 (VIEW_CONVERT_EXPR, vectype_out, new_temp);
@@ -6137,7 +6157,7 @@ vectorizable_operation (gimple *stmt, gi
 	      new_temp = make_ssa_name (vec_cvt_dest, new_stmt);
 	      gimple_assign_set_lhs (new_stmt, new_temp);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
           if (slp_node)
 	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
@@ -6275,7 +6295,7 @@ vectorizable_store (gimple *stmt, gimple
   /* Is vectorizable store? */
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (gassign *assign = dyn_cast <gassign *> (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt_info->stmt))
     {
       tree scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
@@ -6292,7 +6312,7 @@ vectorizable_store (gimple *stmt, gimple
     }
   else
     {
-      gcall *call = dyn_cast <gcall *> (stmt);
+      gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
       if (!call || !gimple_call_internal_p (call))
 	return false;
 
@@ -6312,13 +6332,13 @@ vectorizable_store (gimple *stmt, gimple
       if (mask_index >= 0)
 	{
 	  mask = gimple_call_arg (call, mask_index);
-	  if (!vect_check_load_store_mask (stmt, mask, &mask_dt,
+	  if (!vect_check_load_store_mask (stmt_info, mask, &mask_dt,
 					   &mask_vectype))
 	    return false;
 	}
     }
 
-  op = vect_get_store_rhs (stmt);
+  op = vect_get_store_rhs (stmt_info);
 
   /* Cannot have hybrid store SLP -- that would mean storing to the
      same location twice.  */
@@ -6346,7 +6366,7 @@ vectorizable_store (gimple *stmt, gimple
   gcc_assert (ncopies >= 1);
 
   /* FORNOW.  This restriction should be relaxed.  */
-  if (loop && nested_in_vect_loop_p (loop, stmt) && ncopies > 1)
+  if (loop && nested_in_vect_loop_p (loop, stmt_info) && ncopies > 1)
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -6354,7 +6374,7 @@ vectorizable_store (gimple *stmt, gimple
       return false;
     }
 
-  if (!vect_check_store_rhs (stmt, op, &rhs_dt, &rhs_vectype, &vls_type))
+  if (!vect_check_store_rhs (stmt_info, op, &rhs_dt, &rhs_vectype, &vls_type))
     return false;
 
   elem_type = TREE_TYPE (vectype);
@@ -6364,7 +6384,7 @@ vectorizable_store (gimple *stmt, gimple
     return false;
 
   vect_memory_access_type memory_access_type;
-  if (!get_load_store_type (stmt, vectype, slp, mask, vls_type, ncopies,
+  if (!get_load_store_type (stmt_info, vectype, slp, mask, vls_type, ncopies,
 			    &memory_access_type, &gs_info))
     return false;
 
@@ -6501,7 +6521,7 @@ vectorizable_store (gimple *stmt, gimple
       /* Currently we support only unconditional scatter stores,
 	 so mask should be all ones.  */
       mask = build_int_cst (masktype, -1);
-      mask = vect_init_vector (stmt, mask, masktype, NULL);
+      mask = vect_init_vector (stmt_info, mask, masktype, NULL);
 
       scale = build_int_cst (scaletype, gs_info.scale);
 
@@ -6511,9 +6531,9 @@ vectorizable_store (gimple *stmt, gimple
 	  if (j == 0)
 	    {
 	      src = vec_oprnd1
-		= vect_get_vec_def_for_operand (op, stmt);
+		= vect_get_vec_def_for_operand (op, stmt_info);
 	      op = vec_oprnd0
-		= vect_get_vec_def_for_operand (gs_info.offset, stmt);
+		= vect_get_vec_def_for_operand (gs_info.offset, stmt_info);
 	    }
 	  else if (modifier != NONE && (j & 1))
 	    {
@@ -6522,12 +6542,12 @@ vectorizable_store (gimple *stmt, gimple
 		  src = vec_oprnd1
 		    = vect_get_vec_def_for_stmt_copy (rhs_dt, vec_oprnd1);
 		  op = permute_vec_elements (vec_oprnd0, vec_oprnd0, perm_mask,
-					     stmt, gsi);
+					     stmt_info, gsi);
 		}
 	      else if (modifier == NARROW)
 		{
 		  src = permute_vec_elements (vec_oprnd1, vec_oprnd1, perm_mask,
-					      stmt, gsi);
+					      stmt_info, gsi);
 		  op = vec_oprnd0
 		    = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
 						      vec_oprnd0);
@@ -6552,7 +6572,7 @@ vectorizable_store (gimple *stmt, gimple
 	      src = build1 (VIEW_CONVERT_EXPR, srctype, src);
 	      gassign *new_stmt
 		= gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	      src = var;
 	    }
 
@@ -6564,14 +6584,14 @@ vectorizable_store (gimple *stmt, gimple
 	      op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
 	      gassign *new_stmt
 		= gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
-	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	      vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	      op = var;
 	    }
 
 	  gcall *new_stmt
 	    = gimple_build_call (gs_info.decl, 5, ptr, mask, op, src, scale);
 	  stmt_vec_info new_stmt_info
-	    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 	  if (prev_stmt_info == NULL_STMT_VEC_INFO)
 	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
@@ -6588,7 +6608,7 @@ vectorizable_store (gimple *stmt, gimple
   if (grouped_store)
     {
       /* FORNOW */
-      gcc_assert (!loop || !nested_in_vect_loop_p (loop, stmt));
+      gcc_assert (!loop || !nested_in_vect_loop_p (loop, stmt_info));
 
       /* We vectorize all the stmts of the interleaving group when we
 	 reach the last stmt in the group.  */
@@ -6642,7 +6662,7 @@ vectorizable_store (gimple *stmt, gimple
       unsigned int const_nunits = nunits.to_constant ();
 
       gcc_assert (!LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
-      gcc_assert (!nested_in_vect_loop_p (loop, stmt));
+      gcc_assert (!nested_in_vect_loop_p (loop, stmt_info));
 
       stride_base
 	= fold_build_pointer_plus
@@ -6768,7 +6788,7 @@ vectorizable_store (gimple *stmt, gimple
 	      tree newoff = copy_ssa_name (running_off, NULL);
 	      incr = gimple_build_assign (newoff, POINTER_PLUS_EXPR,
 					  running_off, pos);
-	      vect_finish_stmt_generation (stmt, incr, gsi);
+	      vect_finish_stmt_generation (stmt_info, incr, gsi);
 	      running_off = newoff;
 	    }
 	  unsigned int group_el = 0;
@@ -6782,8 +6802,8 @@ vectorizable_store (gimple *stmt, gimple
 		{
 		  if (slp)
 		    {
-		      vect_get_vec_defs (op, NULL_TREE, stmt, &vec_oprnds, NULL,
-					 slp_node);
+		      vect_get_vec_defs (op, NULL_TREE, stmt_info,
+					 &vec_oprnds, NULL, slp_node);
 		      vec_oprnd = vec_oprnds[0];
 		    }
 		  else
@@ -6811,7 +6831,7 @@ vectorizable_store (gimple *stmt, gimple
 		  gimple *pun
 		    = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
 							lvectype, vec_oprnd));
-		  vect_finish_stmt_generation (stmt, pun, gsi);
+		  vect_finish_stmt_generation (stmt_info, pun, gsi);
 		  vec_oprnd = tem;
 		}
 	      for (i = 0; i < nstores; i++)
@@ -6838,7 +6858,7 @@ vectorizable_store (gimple *stmt, gimple
 		  /* And store it to *running_off.  */
 		  assign = gimple_build_assign (newref, elem);
 		  stmt_vec_info assign_info
-		    = vect_finish_stmt_generation (stmt, assign, gsi);
+		    = vect_finish_stmt_generation (stmt_info, assign, gsi);
 
 		  group_el += lnel;
 		  if (! slp
@@ -6847,7 +6867,7 @@ vectorizable_store (gimple *stmt, gimple
 		      newoff = copy_ssa_name (running_off, NULL);
 		      incr = gimple_build_assign (newoff, POINTER_PLUS_EXPR,
 						  running_off, stride_step);
-		      vect_finish_stmt_generation (stmt, incr, gsi);
+		      vect_finish_stmt_generation (stmt_info, incr, gsi);
 
 		      running_off = newoff;
 		      group_el = 0;
@@ -6905,7 +6925,7 @@ vectorizable_store (gimple *stmt, gimple
   else if (memory_access_type == VMAT_GATHER_SCATTER)
     {
       aggr_type = elem_type;
-      vect_get_strided_load_store_ops (stmt, loop_vinfo, &gs_info,
+      vect_get_strided_load_store_ops (stmt_info, loop_vinfo, &gs_info,
 				       &bump, &vec_offset);
     }
   else
@@ -6969,8 +6989,8 @@ vectorizable_store (gimple *stmt, gimple
           if (slp)
             {
 	      /* Get vectorized arguments for SLP_NODE.  */
-              vect_get_vec_defs (op, NULL_TREE, stmt, &vec_oprnds,
-                                 NULL, slp_node);
+	      vect_get_vec_defs (op, NULL_TREE, stmt_info, &vec_oprnds,
+				 NULL, slp_node);
 
               vec_oprnd = vec_oprnds[0];
             }
@@ -6999,7 +7019,7 @@ vectorizable_store (gimple *stmt, gimple
 		  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
 		}
 	      if (mask)
-		vec_mask = vect_get_vec_def_for_operand (mask, stmt,
+		vec_mask = vect_get_vec_def_for_operand (mask, stmt_info,
 							 mask_vectype);
 	    }
 
@@ -7022,7 +7042,7 @@ vectorizable_store (gimple *stmt, gimple
 	    }
 	  else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
 	    {
-	      vect_get_gather_scatter_ops (loop, stmt, &gs_info,
+	      vect_get_gather_scatter_ops (loop, stmt_info, &gs_info,
 					   &dataref_ptr, &vec_offset);
 	      inv_p = false;
 	    }
@@ -7061,8 +7081,8 @@ vectorizable_store (gimple *stmt, gimple
 	    vec_offset = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
 							 vec_offset);
 	  else
-	    dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi, stmt,
-					   bump);
+	    dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
+					   stmt_info, bump);
 	}
 
       if (memory_access_type == VMAT_LOAD_STORE_LANES)
@@ -7075,13 +7095,13 @@ vectorizable_store (gimple *stmt, gimple
 	  /* Invalidate the current contents of VEC_ARRAY.  This should
 	     become an RTL clobber too, which prevents the vector registers
 	     from being upward-exposed.  */
-	  vect_clobber_variable (stmt, gsi, vec_array);
+	  vect_clobber_variable (stmt_info, gsi, vec_array);
 
 	  /* Store the individual vectors into the array.  */
 	  for (i = 0; i < vec_num; i++)
 	    {
 	      vec_oprnd = dr_chain[i];
-	      write_vector_array (stmt, gsi, vec_oprnd, vec_array, i);
+	      write_vector_array (stmt_info, gsi, vec_oprnd, vec_array, i);
 	    }
 
 	  tree final_mask = NULL;
@@ -7114,10 +7134,10 @@ vectorizable_store (gimple *stmt, gimple
 	      gimple_call_set_lhs (call, data_ref);
 	    }
 	  gimple_call_set_nothrow (call, true);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt_info, call, gsi);
 
 	  /* Record that VEC_ARRAY is now dead.  */
-	  vect_clobber_variable (stmt, gsi, vec_array);
+	  vect_clobber_variable (stmt_info, gsi, vec_array);
 	}
       else
 	{
@@ -7127,7 +7147,7 @@ vectorizable_store (gimple *stmt, gimple
 	      if (j == 0)
 		result_chain.create (group_size);
 	      /* Permute.  */
-	      vect_permute_store_chain (dr_chain, group_size, stmt, gsi,
+	      vect_permute_store_chain (dr_chain, group_size, stmt_info, gsi,
 					&result_chain);
 	    }
 
@@ -7159,14 +7179,14 @@ vectorizable_store (gimple *stmt, gimple
 		       scale, vec_oprnd);
 		  gimple_call_set_nothrow (call, true);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, call, gsi);
+		    = vect_finish_stmt_generation (stmt_info, call, gsi);
 		  break;
 		}
 
 	      if (i > 0)
 		/* Bump the vector pointer.  */
 		dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-					       stmt, bump);
+					       stmt_info, bump);
 
 	      if (slp)
 		vec_oprnd = vec_oprnds[i];
@@ -7193,16 +7213,15 @@ vectorizable_store (gimple *stmt, gimple
 	      if (memory_access_type == VMAT_CONTIGUOUS_REVERSE)
 		{
 		  tree perm_mask = perm_mask_for_reverse (vectype);
-		  tree perm_dest 
-		    = vect_create_destination_var (vect_get_store_rhs (stmt),
-						   vectype);
+		  tree perm_dest = vect_create_destination_var
+		    (vect_get_store_rhs (stmt_info), vectype);
 		  tree new_temp = make_ssa_name (perm_dest);
 
 		  /* Generate the permute statement.  */
 		  gimple *perm_stmt 
 		    = gimple_build_assign (new_temp, VEC_PERM_EXPR, vec_oprnd,
 					   vec_oprnd, perm_mask);
-		  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+		  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 
 		  perm_stmt = SSA_NAME_DEF_STMT (new_temp);
 		  vec_oprnd = new_temp;
@@ -7219,7 +7238,7 @@ vectorizable_store (gimple *stmt, gimple
 						  final_mask, vec_oprnd);
 		  gimple_call_set_nothrow (call, true);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, call, gsi);
+		    = vect_finish_stmt_generation (stmt_info, call, gsi);
 		}
 	      else
 		{
@@ -7242,7 +7261,7 @@ vectorizable_store (gimple *stmt, gimple
 		  gassign *new_stmt
 		    = gimple_build_assign (data_ref, vec_oprnd);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 
 	      if (slp)
@@ -7446,7 +7465,7 @@ vectorizable_load (gimple *stmt, gimple_
     return false;
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (gassign *assign = dyn_cast <gassign *> (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt_info->stmt))
     {
       scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) != SSA_NAME)
@@ -7465,7 +7484,7 @@ vectorizable_load (gimple *stmt, gimple_
     }
   else
     {
-      gcall *call = dyn_cast <gcall *> (stmt);
+      gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
       if (!call || !gimple_call_internal_p (call))
 	return false;
 
@@ -7489,7 +7508,7 @@ vectorizable_load (gimple *stmt, gimple_
       if (mask_index >= 0)
 	{
 	  mask = gimple_call_arg (call, mask_index);
-	  if (!vect_check_load_store_mask (stmt, mask, &mask_dt,
+	  if (!vect_check_load_store_mask (stmt_info, mask, &mask_dt,
 					   &mask_vectype))
 	    return false;
 	}
@@ -7504,7 +7523,7 @@ vectorizable_load (gimple *stmt, gimple_
   if (loop_vinfo)
     {
       loop = LOOP_VINFO_LOOP (loop_vinfo);
-      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt);
+      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info);
       vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
     }
   else
@@ -7601,7 +7620,7 @@ vectorizable_load (gimple *stmt, gimple_
     group_size = 1;
 
   vect_memory_access_type memory_access_type;
-  if (!get_load_store_type (stmt, vectype, slp, mask, VLS_LOAD, ncopies,
+  if (!get_load_store_type (stmt_info, vectype, slp, mask, VLS_LOAD, ncopies,
 			    &memory_access_type, &gs_info))
     return false;
 
@@ -7669,7 +7688,7 @@ vectorizable_load (gimple *stmt, gimple_
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
-      vect_build_gather_load_calls (stmt, gsi, vec_stmt, &gs_info, mask,
+      vect_build_gather_load_calls (stmt_info, gsi, vec_stmt, &gs_info, mask,
 				    mask_dt);
       return true;
     }
@@ -7712,7 +7731,7 @@ vectorizable_load (gimple *stmt, gimple_
 	  if (grouped_load)
 	    cst_offset
 	      = (tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (vectype)))
-		 * vect_get_place_in_interleaving_chain (stmt,
+		 * vect_get_place_in_interleaving_chain (stmt_info,
 							 first_stmt_info));
 	  group_size = 1;
 	  ref_type = reference_alias_ptr_type (DR_REF (dr));
@@ -7857,7 +7876,7 @@ vectorizable_load (gimple *stmt, gimple_
 	      gassign *new_stmt
 		= gimple_build_assign (make_ssa_name (ltype), data_ref);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	      if (nloads > 1)
 		CONSTRUCTOR_APPEND_ELT (v, NULL_TREE,
 					gimple_assign_lhs (new_stmt));
@@ -7869,7 +7888,7 @@ vectorizable_load (gimple *stmt, gimple_
 		  tree newoff = copy_ssa_name (running_off);
 		  gimple *incr = gimple_build_assign (newoff, POINTER_PLUS_EXPR,
 						      running_off, stride_step);
-		  vect_finish_stmt_generation (stmt, incr, gsi);
+		  vect_finish_stmt_generation (stmt_info, incr, gsi);
 
 		  running_off = newoff;
 		  group_el = 0;
@@ -7878,7 +7897,7 @@ vectorizable_load (gimple *stmt, gimple_
 	  if (nloads > 1)
 	    {
 	      tree vec_inv = build_constructor (lvectype, v);
-	      new_temp = vect_init_vector (stmt, vec_inv, lvectype, gsi);
+	      new_temp = vect_init_vector (stmt_info, vec_inv, lvectype, gsi);
 	      new_stmt_info = vinfo->lookup_def (new_temp);
 	      if (lvectype != vectype)
 		{
@@ -7888,7 +7907,7 @@ vectorizable_load (gimple *stmt, gimple_
 					   build1 (VIEW_CONVERT_EXPR,
 						   vectype, new_temp));
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 	    }
 
@@ -8145,7 +8164,7 @@ vectorizable_load (gimple *stmt, gimple_
   else if (memory_access_type == VMAT_GATHER_SCATTER)
     {
       aggr_type = elem_type;
-      vect_get_strided_load_store_ops (stmt, loop_vinfo, &gs_info,
+      vect_get_strided_load_store_ops (stmt_info, loop_vinfo, &gs_info,
 				       &bump, &vec_offset);
     }
   else
@@ -8198,11 +8217,11 @@ vectorizable_load (gimple *stmt, gimple_
 						    DR_INIT (first_dr),
 						    DR_INIT (ptrdr)));
 	      dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-					     stmt, diff);
+					     stmt_info, diff);
 	    }
 	  else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
 	    {
-	      vect_get_gather_scatter_ops (loop, stmt, &gs_info,
+	      vect_get_gather_scatter_ops (loop, stmt_info, &gs_info,
 					   &dataref_ptr, &vec_offset);
 	      inv_p = false;
 	    }
@@ -8213,7 +8232,7 @@ vectorizable_load (gimple *stmt, gimple_
 					  simd_lane_access_p, &inv_p,
 					  byte_offset, bump);
 	  if (mask)
-	    vec_mask = vect_get_vec_def_for_operand (mask, stmt,
+	    vec_mask = vect_get_vec_def_for_operand (mask, stmt_info,
 						     mask_vectype);
 	}
       else
@@ -8226,7 +8245,7 @@ vectorizable_load (gimple *stmt, gimple_
 							 vec_offset);
 	  else
 	    dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-					   stmt, bump);
+					   stmt_info, bump);
 	  if (mask)
 	    vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
 	}
@@ -8269,21 +8288,21 @@ vectorizable_load (gimple *stmt, gimple_
 	    }
 	  gimple_call_set_lhs (call, vec_array);
 	  gimple_call_set_nothrow (call, true);
-	  new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
+	  new_stmt_info = vect_finish_stmt_generation (stmt_info, call, gsi);
 
 	  /* Extract each vector into an SSA_NAME.  */
 	  for (i = 0; i < vec_num; i++)
 	    {
-	      new_temp = read_vector_array (stmt, gsi, scalar_dest,
+	      new_temp = read_vector_array (stmt_info, gsi, scalar_dest,
 					    vec_array, i);
 	      dr_chain.quick_push (new_temp);
 	    }
 
 	  /* Record the mapping between SSA_NAMEs and statements.  */
-	  vect_record_grouped_load_vectors (stmt, dr_chain);
+	  vect_record_grouped_load_vectors (stmt_info, dr_chain);
 
 	  /* Record that VEC_ARRAY is now dead.  */
-	  vect_clobber_variable (stmt, gsi, vec_array);
+	  vect_clobber_variable (stmt_info, gsi, vec_array);
 	}
       else
 	{
@@ -8301,7 +8320,7 @@ vectorizable_load (gimple *stmt, gimple_
 
 	      if (i > 0)
 		dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-					       stmt, bump);
+					       stmt_info, bump);
 
 	      /* 2. Create the vector-load in the loop.  */
 	      gimple *new_stmt = NULL;
@@ -8402,7 +8421,7 @@ vectorizable_load (gimple *stmt, gimple_
 				  build_int_cst
 				  (TREE_TYPE (dataref_ptr),
 				   -(HOST_WIDE_INT) align));
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		    data_ref
 		      = build2 (MEM_REF, vectype, ptr,
 				build_int_cst (ref_type, 0));
@@ -8412,22 +8431,23 @@ vectorizable_load (gimple *stmt, gimple_
 		    new_stmt = gimple_build_assign (vec_dest, data_ref);
 		    new_temp = make_ssa_name (vec_dest, new_stmt);
 		    gimple_assign_set_lhs (new_stmt, new_temp);
-		    gimple_set_vdef (new_stmt, gimple_vdef (stmt));
-		    gimple_set_vuse (new_stmt, gimple_vuse (stmt));
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    gimple_set_vdef (new_stmt, gimple_vdef (stmt_info->stmt));
+		    gimple_set_vuse (new_stmt, gimple_vuse (stmt_info->stmt));
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		    msq = new_temp;
 
 		    bump = size_binop (MULT_EXPR, vs,
 				       TYPE_SIZE_UNIT (elem_type));
 		    bump = size_binop (MINUS_EXPR, bump, size_one_node);
-		    ptr = bump_vector_ptr (dataref_ptr, NULL, gsi, stmt, bump);
+		    ptr = bump_vector_ptr (dataref_ptr, NULL, gsi,
+					   stmt_info, bump);
 		    new_stmt = gimple_build_assign
 				 (NULL_TREE, BIT_AND_EXPR, ptr,
 				  build_int_cst
 				  (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
 		    ptr = copy_ssa_name (ptr, new_stmt);
 		    gimple_assign_set_lhs (new_stmt, ptr);
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		    data_ref
 		      = build2 (MEM_REF, vectype, ptr,
 				build_int_cst (ref_type, 0));
@@ -8444,7 +8464,7 @@ vectorizable_load (gimple *stmt, gimple_
 		      (new_temp, BIT_AND_EXPR, dataref_ptr,
 		       build_int_cst (TREE_TYPE (dataref_ptr),
 				     -(HOST_WIDE_INT) align));
-		    vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		    data_ref
 		      = build2 (MEM_REF, vectype, new_temp,
 				build_int_cst (ref_type, 0));
@@ -8463,7 +8483,7 @@ vectorizable_load (gimple *stmt, gimple_
 	      new_temp = make_ssa_name (vec_dest, new_stmt);
 	      gimple_set_lhs (new_stmt, new_temp);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 	      /* 3. Handle explicit realignment if necessary/supported.
 		 Create in loop:
@@ -8480,7 +8500,7 @@ vectorizable_load (gimple *stmt, gimple_
 		  new_temp = make_ssa_name (vec_dest, new_stmt);
 		  gimple_assign_set_lhs (new_stmt, new_temp);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
 		  if (alignment_support_scheme == dr_explicit_realign_optimized)
 		    {
@@ -8503,7 +8523,7 @@ vectorizable_load (gimple *stmt, gimple_
 		     thus we can insert it on the preheader edge.  */
 		  if (LOOP_VINFO_NO_DATA_DEPENDENCIES (loop_vinfo)
 		      && !nested_in_vect_loop
-		      && hoist_defs_of_uses (stmt, loop))
+		      && hoist_defs_of_uses (stmt_info, loop))
 		    {
 		      if (dump_enabled_p ())
 			{
@@ -8518,7 +8538,8 @@ vectorizable_load (gimple *stmt, gimple_
 			 gimple_build_assign (tem,
 					      unshare_expr
 					        (gimple_assign_rhs1 (stmt))));
-		      new_temp = vect_init_vector (stmt, tem, vectype, NULL);
+		      new_temp = vect_init_vector (stmt_info, tem,
+						   vectype, NULL);
 		      new_stmt = SSA_NAME_DEF_STMT (new_temp);
 		      new_stmt_info = vinfo->add_stmt (new_stmt);
 		    }
@@ -8526,7 +8547,7 @@ vectorizable_load (gimple *stmt, gimple_
 		    {
 		      gimple_stmt_iterator gsi2 = *gsi;
 		      gsi_next (&gsi2);
-		      new_temp = vect_init_vector (stmt, scalar_dest,
+		      new_temp = vect_init_vector (stmt_info, scalar_dest,
 						   vectype, &gsi2);
 		      new_stmt_info = vinfo->lookup_def (new_temp);
 		    }
@@ -8536,7 +8557,7 @@ vectorizable_load (gimple *stmt, gimple_
 		{
 		  tree perm_mask = perm_mask_for_reverse (vectype);
 		  new_temp = permute_vec_elements (new_temp, new_temp,
-						   perm_mask, stmt, gsi);
+						   perm_mask, stmt_info, gsi);
 		  new_stmt_info = vinfo->lookup_def (new_temp);
 		}
 
@@ -8562,7 +8583,7 @@ vectorizable_load (gimple *stmt, gimple_
 		       * group_gap_adj);
 		  tree bump = wide_int_to_tree (sizetype, bump_val);
 		  dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-						 stmt, bump);
+						 stmt_info, bump);
 		  group_elt = 0;
 		}
 	    }
@@ -8575,7 +8596,7 @@ vectorizable_load (gimple *stmt, gimple_
 		   * group_gap_adj);
 	      tree bump = wide_int_to_tree (sizetype, bump_val);
 	      dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
-					     stmt, bump);
+					     stmt_info, bump);
 	    }
 	}
 
@@ -8598,7 +8619,8 @@ vectorizable_load (gimple *stmt, gimple_
           if (grouped_load)
   	    {
 	      if (memory_access_type != VMAT_LOAD_STORE_LANES)
-		vect_transform_grouped_load (stmt, dr_chain, group_size, gsi);
+		vect_transform_grouped_load (stmt_info, dr_chain,
+					     group_size, gsi);
 	      *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
 	    }
           else
@@ -8942,7 +8964,7 @@ vectorizable_condition (gimple *stmt, gi
 	      if (masked)
 		{
 		  vec_cond_lhs
-		    = vect_get_vec_def_for_operand (cond_expr, stmt,
+		    = vect_get_vec_def_for_operand (cond_expr, stmt_info,
 						    comp_vectype);
 		  vect_is_simple_use (cond_expr, stmt_info->vinfo, &dts[0]);
 		}
@@ -8950,12 +8972,12 @@ vectorizable_condition (gimple *stmt, gi
 		{
 		  vec_cond_lhs
 		    = vect_get_vec_def_for_operand (cond_expr0,
-						    stmt, comp_vectype);
+						    stmt_info, comp_vectype);
 		  vect_is_simple_use (cond_expr0, loop_vinfo, &dts[0]);
 
 		  vec_cond_rhs
 		    = vect_get_vec_def_for_operand (cond_expr1,
-						    stmt, comp_vectype);
+						    stmt_info, comp_vectype);
 		  vect_is_simple_use (cond_expr1, loop_vinfo, &dts[1]);
 		}
 	      if (reduc_index == 1)
@@ -8963,7 +8985,7 @@ vectorizable_condition (gimple *stmt, gi
 	      else
 		{
 		  vec_then_clause = vect_get_vec_def_for_operand (then_clause,
-								  stmt);
+								  stmt_info);
 		  vect_is_simple_use (then_clause, loop_vinfo, &dts[2]);
 		}
 	      if (reduc_index == 2)
@@ -8971,7 +8993,7 @@ vectorizable_condition (gimple *stmt, gi
 	      else
 		{
 		  vec_else_clause = vect_get_vec_def_for_operand (else_clause,
-								  stmt);
+								  stmt_info);
 		  vect_is_simple_use (else_clause, loop_vinfo, &dts[3]);
 		}
 	    }
@@ -9026,7 +9048,7 @@ vectorizable_condition (gimple *stmt, gi
 		    new_stmt
 		      = gimple_build_assign (new_temp, bitop1, vec_cond_lhs,
 					     vec_cond_rhs);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  if (bitop2 == NOP_EXPR)
 		    vec_compare = new_temp;
 		  else if (bitop2 == BIT_NOT_EXPR)
@@ -9041,7 +9063,7 @@ vectorizable_condition (gimple *stmt, gi
 		      new_stmt
 			= gimple_build_assign (vec_compare, bitop2,
 					       vec_cond_lhs, new_temp);
-		      vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		      vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		    }
 		}
 	    }
@@ -9052,7 +9074,7 @@ vectorizable_condition (gimple *stmt, gi
 		  tree vec_compare_name = make_ssa_name (vec_cmp_type);
 		  gassign *new_stmt = gimple_build_assign (vec_compare_name,
 							   vec_compare);
-		  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		  vec_compare = vec_compare_name;
 		}
 	      gcc_assert (reduc_index == 2);
@@ -9061,17 +9083,18 @@ vectorizable_condition (gimple *stmt, gi
 		 vec_then_clause);
 	      gimple_call_set_lhs (new_stmt, scalar_dest);
 	      SSA_NAME_DEF_STMT (scalar_dest) = new_stmt;
-	      if (stmt == gsi_stmt (*gsi))
-		new_stmt_info = vect_finish_replace_stmt (stmt, new_stmt);
+	      if (stmt_info->stmt == gsi_stmt (*gsi))
+		new_stmt_info = vect_finish_replace_stmt (stmt_info, new_stmt);
 	      else
 		{
 		  /* In this case we're moving the definition to later in the
 		     block.  That doesn't matter because the only uses of the
 		     lhs are in phi statements.  */
-		  gimple_stmt_iterator old_gsi = gsi_for_stmt (stmt);
+		  gimple_stmt_iterator old_gsi
+		    = gsi_for_stmt (stmt_info->stmt);
 		  gsi_remove (&old_gsi, true);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 	    }
 	  else
@@ -9081,7 +9104,7 @@ vectorizable_condition (gimple *stmt, gi
 		= gimple_build_assign (new_temp, VEC_COND_EXPR, vec_compare,
 				       vec_then_clause, vec_else_clause);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
           if (slp_node)
 	    SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
@@ -9307,8 +9330,10 @@ vectorizable_comparison (gimple *stmt, g
 	    }
 	  else
 	    {
-	      vec_rhs1 = vect_get_vec_def_for_operand (rhs1, stmt, vectype);
-	      vec_rhs2 = vect_get_vec_def_for_operand (rhs2, stmt, vectype);
+	      vec_rhs1 = vect_get_vec_def_for_operand (rhs1, stmt_info,
+						       vectype);
+	      vec_rhs2 = vect_get_vec_def_for_operand (rhs2, stmt_info,
+						       vectype);
 	    }
 	}
       else
@@ -9336,7 +9361,7 @@ vectorizable_comparison (gimple *stmt, g
 	      gassign *new_stmt = gimple_build_assign (new_temp, code,
 						       vec_rhs1, vec_rhs2);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	    }
 	  else
 	    {
@@ -9347,7 +9372,7 @@ vectorizable_comparison (gimple *stmt, g
 		new_stmt = gimple_build_assign (new_temp, bitop1, vec_rhs1,
 						vec_rhs2);
 	      new_stmt_info
-		= vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		= vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 	      if (bitop2 != NOP_EXPR)
 		{
 		  tree res = make_ssa_name (mask);
@@ -9357,7 +9382,7 @@ vectorizable_comparison (gimple *stmt, g
 		    new_stmt = gimple_build_assign (res, bitop2, vec_rhs1,
 						    new_temp);
 		  new_stmt_info
-		    = vect_finish_stmt_generation (stmt, new_stmt, gsi);
+		    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 		}
 	    }
 	  if (slp_node)
@@ -9427,10 +9452,10 @@ vect_analyze_stmt (gimple *stmt, bool *n
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_NOTE, vect_location, "==> examining statement: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
     }
 
-  if (gimple_has_volatile_ops (stmt))
+  if (gimple_has_volatile_ops (stmt_info->stmt))
     {
       if (dump_enabled_p ())
         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -9447,7 +9472,6 @@ vect_analyze_stmt (gimple *stmt, bool *n
 
       for (si = gsi_start (pattern_def_seq); !gsi_end_p (si); gsi_next (&si))
 	{
-	  gimple *pattern_def_stmt = gsi_stmt (si);
 	  stmt_vec_info pattern_def_stmt_info
 	    = vinfo->lookup_stmt (gsi_stmt (si));
 	  if (STMT_VINFO_RELEVANT_P (pattern_def_stmt_info)
@@ -9458,10 +9482,11 @@ vect_analyze_stmt (gimple *stmt, bool *n
 		{
 		  dump_printf_loc (MSG_NOTE, vect_location,
 				   "==> examining pattern def statement: ");
-		  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_def_stmt, 0);
+		  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
+				    pattern_def_stmt_info->stmt, 0);
 		}
 
-	      if (!vect_analyze_stmt (pattern_def_stmt,
+	      if (!vect_analyze_stmt (pattern_def_stmt_info,
 				      need_to_vectorize, node, node_instance,
 				      cost_vec))
 		return false;
@@ -9499,7 +9524,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
             {
               dump_printf_loc (MSG_NOTE, vect_location,
                                "==> examining pattern statement: ");
-              dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
             }
         }
       else
@@ -9521,7 +9546,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
         {
           dump_printf_loc (MSG_NOTE, vect_location,
                            "==> examining pattern statement: ");
-          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_stmt_info->stmt, 0);
         }
 
       if (!vect_analyze_stmt (pattern_stmt_info, need_to_vectorize, node,
@@ -9557,8 +9582,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
 
   if (STMT_VINFO_RELEVANT_P (stmt_info))
     {
-      gcc_assert (!VECTOR_MODE_P (TYPE_MODE (gimple_expr_type (stmt))));
-      gcall *call = dyn_cast <gcall *> (stmt);
+      tree type = gimple_expr_type (stmt_info->stmt);
+      gcc_assert (!VECTOR_MODE_P (TYPE_MODE (type)));
+      gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
       gcc_assert (STMT_VINFO_VECTYPE (stmt_info)
 		  || (call && gimple_call_lhs (call) == NULL_TREE));
       *need_to_vectorize = true;
@@ -9575,34 +9601,40 @@ vect_analyze_stmt (gimple *stmt, bool *n
   if (!bb_vinfo
       && (STMT_VINFO_RELEVANT_P (stmt_info)
 	  || STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def))
-    ok = (vectorizable_simd_clone_call (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_conversion (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_shift (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_operation (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_assignment (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_load (stmt, NULL, NULL, node, node_instance, cost_vec)
-	  || vectorizable_call (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_store (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_reduction (stmt, NULL, NULL, node, node_instance,
+    ok = (vectorizable_simd_clone_call (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_conversion (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_shift (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_operation (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_assignment (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_load (stmt_info, NULL, NULL, node, node_instance,
+				cost_vec)
+	  || vectorizable_call (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_store (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_reduction (stmt_info, NULL, NULL, node,
+				     node_instance, cost_vec)
+	  || vectorizable_induction (stmt_info, NULL, NULL, node, cost_vec)
+	  || vectorizable_condition (stmt_info, NULL, NULL, NULL, 0, node,
 				     cost_vec)
-	  || vectorizable_induction (stmt, NULL, NULL, node, cost_vec)
-	  || vectorizable_condition (stmt, NULL, NULL, NULL, 0, node, cost_vec)
-	  || vectorizable_comparison (stmt, NULL, NULL, NULL, node, cost_vec));
+	  || vectorizable_comparison (stmt_info, NULL, NULL, NULL, node,
+				      cost_vec));
   else
     {
       if (bb_vinfo)
-	ok = (vectorizable_simd_clone_call (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_conversion (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_shift (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_operation (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_assignment (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_load (stmt, NULL, NULL, node, node_instance,
+	ok = (vectorizable_simd_clone_call (stmt_info, NULL, NULL, node,
+					    cost_vec)
+	      || vectorizable_conversion (stmt_info, NULL, NULL, node,
+					  cost_vec)
+	      || vectorizable_shift (stmt_info, NULL, NULL, node, cost_vec)
+	      || vectorizable_operation (stmt_info, NULL, NULL, node, cost_vec)
+	      || vectorizable_assignment (stmt_info, NULL, NULL, node,
+					  cost_vec)
+	      || vectorizable_load (stmt_info, NULL, NULL, node, node_instance,
 				    cost_vec)
-	      || vectorizable_call (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_store (stmt, NULL, NULL, node, cost_vec)
-	      || vectorizable_condition (stmt, NULL, NULL, NULL, 0, node,
+	      || vectorizable_call (stmt_info, NULL, NULL, node, cost_vec)
+	      || vectorizable_store (stmt_info, NULL, NULL, node, cost_vec)
+	      || vectorizable_condition (stmt_info, NULL, NULL, NULL, 0, node,
 					 cost_vec)
-	      || vectorizable_comparison (stmt, NULL, NULL, NULL, node,
+	      || vectorizable_comparison (stmt_info, NULL, NULL, NULL, node,
 					  cost_vec));
     }
 
@@ -9613,7 +9645,8 @@ vect_analyze_stmt (gimple *stmt, bool *n
           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                            "not vectorized: relevant stmt not ");
           dump_printf (MSG_MISSED_OPTIMIZATION, "supported: ");
-          dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+			    stmt_info->stmt, 0);
         }
 
       return false;
@@ -9623,13 +9656,14 @@ vect_analyze_stmt (gimple *stmt, bool *n
       need extra handling, except for vectorizable reductions.  */
   if (!bb_vinfo
       && STMT_VINFO_TYPE (stmt_info) != reduc_vec_info_type
-      && !can_vectorize_live_stmts (stmt, NULL, node, NULL, cost_vec))
+      && !can_vectorize_live_stmts (stmt_info, NULL, node, NULL, cost_vec))
     {
       if (dump_enabled_p ())
         {
           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                            "not vectorized: live stmt not supported: ");
-          dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+			    stmt_info->stmt, 0);
         }
 
        return false;
@@ -9660,45 +9694,49 @@ vect_transform_stmt (gimple *stmt, gimpl
   bool nested_p = (STMT_VINFO_LOOP_VINFO (stmt_info)
 		   && nested_in_vect_loop_p
 		        (LOOP_VINFO_LOOP (STMT_VINFO_LOOP_VINFO (stmt_info)),
-			 stmt));
+			 stmt_info));
 
   switch (STMT_VINFO_TYPE (stmt_info))
     {
     case type_demotion_vec_info_type:
     case type_promotion_vec_info_type:
     case type_conversion_vec_info_type:
-      done = vectorizable_conversion (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_conversion (stmt_info, gsi, &vec_stmt, slp_node,
+				      NULL);
       gcc_assert (done);
       break;
 
     case induc_vec_info_type:
-      done = vectorizable_induction (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_induction (stmt_info, gsi, &vec_stmt, slp_node,
+				     NULL);
       gcc_assert (done);
       break;
 
     case shift_vec_info_type:
-      done = vectorizable_shift (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_shift (stmt_info, gsi, &vec_stmt, slp_node, NULL);
       gcc_assert (done);
       break;
 
     case op_vec_info_type:
-      done = vectorizable_operation (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_operation (stmt_info, gsi, &vec_stmt, slp_node,
+				     NULL);
       gcc_assert (done);
       break;
 
     case assignment_vec_info_type:
-      done = vectorizable_assignment (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_assignment (stmt_info, gsi, &vec_stmt, slp_node,
+				      NULL);
       gcc_assert (done);
       break;
 
     case load_vec_info_type:
-      done = vectorizable_load (stmt, gsi, &vec_stmt, slp_node,
+      done = vectorizable_load (stmt_info, gsi, &vec_stmt, slp_node,
                                 slp_node_instance, NULL);
       gcc_assert (done);
       break;
 
     case store_vec_info_type:
-      done = vectorizable_store (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_store (stmt_info, gsi, &vec_stmt, slp_node, NULL);
       gcc_assert (done);
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info) && !slp_node)
 	{
@@ -9716,27 +9754,30 @@ vect_transform_stmt (gimple *stmt, gimpl
       break;
 
     case condition_vec_info_type:
-      done = vectorizable_condition (stmt, gsi, &vec_stmt, NULL, 0, slp_node, NULL);
+      done = vectorizable_condition (stmt_info, gsi, &vec_stmt, NULL, 0,
+				     slp_node, NULL);
       gcc_assert (done);
       break;
 
     case comparison_vec_info_type:
-      done = vectorizable_comparison (stmt, gsi, &vec_stmt, NULL, slp_node, NULL);
+      done = vectorizable_comparison (stmt_info, gsi, &vec_stmt, NULL,
+				      slp_node, NULL);
       gcc_assert (done);
       break;
 
     case call_vec_info_type:
-      done = vectorizable_call (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_call (stmt_info, gsi, &vec_stmt, slp_node, NULL);
       stmt = gsi_stmt (*gsi);
       break;
 
     case call_simd_clone_vec_info_type:
-      done = vectorizable_simd_clone_call (stmt, gsi, &vec_stmt, slp_node, NULL);
+      done = vectorizable_simd_clone_call (stmt_info, gsi, &vec_stmt,
+					   slp_node, NULL);
       stmt = gsi_stmt (*gsi);
       break;
 
     case reduc_vec_info_type:
-      done = vectorizable_reduction (stmt, gsi, &vec_stmt, slp_node,
+      done = vectorizable_reduction (stmt_info, gsi, &vec_stmt, slp_node,
 				     slp_node_instance, NULL);
       gcc_assert (done);
       break;
@@ -9797,7 +9838,8 @@ vect_transform_stmt (gimple *stmt, gimpl
      being vectorized.  */
   if (STMT_VINFO_TYPE (stmt_info) != reduc_vec_info_type)
     {
-      done = can_vectorize_live_stmts (stmt, gsi, slp_node, &vec_stmt, NULL);
+      done = can_vectorize_live_stmts (stmt_info, gsi, slp_node, &vec_stmt,
+				       NULL);
       gcc_assert (done);
     }
 
@@ -10344,18 +10386,18 @@ supportable_widening_operation (enum tre
 	 a VEC_WIDEN_MULT_LO/HI_EXPR check.  */
       if (vect_loop
 	  && STMT_VINFO_RELEVANT (stmt_info) == vect_used_by_reduction
-	  && !nested_in_vect_loop_p (vect_loop, stmt)
+	  && !nested_in_vect_loop_p (vect_loop, stmt_info)
 	  && supportable_widening_operation (VEC_WIDEN_MULT_EVEN_EXPR,
-					     stmt, vectype_out, vectype_in,
-					     code1, code2, multi_step_cvt,
-					     interm_types))
+					     stmt_info, vectype_out,
+					     vectype_in, code1, code2,
+					     multi_step_cvt, interm_types))
         {
           /* Elements in a vector with vect_used_by_reduction property cannot
              be reordered if the use chain with this property does not have the
              same operation.  One such an example is s += a * b, where elements
              in a and b cannot be reordered.  Here we check if the vector defined
              by STMT is only directly used in the reduction statement.  */
-	  tree lhs = gimple_assign_lhs (stmt);
+	  tree lhs = gimple_assign_lhs (stmt_info->stmt);
 	  stmt_vec_info use_stmt_info = loop_info->lookup_single_use (lhs);
 	  if (use_stmt_info
 	      && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def)
@@ -10827,7 +10869,8 @@ vect_get_vector_types_for_stmt (stmt_vec
       if (*stmt_vectype_out != boolean_type_node)
 	{
 	  HOST_WIDE_INT dummy;
-	  scalar_type = vect_get_smallest_scalar_type (stmt, &dummy, &dummy);
+	  scalar_type = vect_get_smallest_scalar_type (stmt_info,
+						       &dummy, &dummy);
 	}
       if (dump_enabled_p ())
 	{
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:25.232822136 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:35.384731983 +0100
@@ -1325,7 +1325,7 @@ vect_dr_behavior (data_reference *dr)
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   if (loop_vinfo == NULL
-      || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt))
+      || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
     return &DR_INNERMOST (dr);
   else
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [32/46] Use stmt_vec_info in function interfaces (part 2)
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (29 preceding siblings ...)
  2018-07-24 10:04 ` [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1) Richard Sandiford
@ 2018-07-24 10:05 ` Richard Sandiford
  2018-07-25 10:06   ` Richard Biener
  2018-07-24 10:05 ` [31/46] Use stmt_vec_info in function interfaces (part 1) Richard Sandiford
                   ` (14 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:05 UTC (permalink / raw)
  To: gcc-patches

This second part handles the mechanical change from a gimple stmt
argument to a stmt_vec_info argument.  It updates the function
comments if they referred to the argument by name, but it doesn't
try to retrofit mentions of the argument in other functions' comments.
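
For illustration, here is a minimal sketch of the before/after shape
of one of these interface changes ("example_vect_fn" is a made-up
name, not a function touched by the patch):

  /* Before: take the raw gimple statement and look up its
     stmt_vec_info through global state.  */
  static bool
  example_vect_fn (gimple *stmt)
  {
    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
    return STMT_VINFO_RELEVANT_P (stmt_info);
  }

  /* After: take the stmt_vec_info directly; the underlying gimple
     statement is recovered with stmt_info->stmt only where it is
     still needed.  */
  static bool
  example_vect_fn (stmt_vec_info stmt_info)
  {
    gimple *stmt = stmt_info->stmt;
    return (STMT_VINFO_RELEVANT_P (stmt_info)
	    && !gimple_has_volatile_ops (stmt));
  }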


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (nested_in_vect_loop_p): Move further down
	file and take a stmt_vec_info instead of a gimple stmt.
	(supportable_widening_operation, vect_finish_replace_stmt)
	(vect_finish_stmt_generation, vect_get_store_rhs)
	(vect_get_vec_def_for_operand_1, vect_get_vec_def_for_operand)
	(vect_get_vec_defs, vect_init_vector, vect_transform_stmt)
	(vect_remove_stores, vect_analyze_stmt, vectorizable_condition)
	(vect_get_smallest_scalar_type, vect_check_gather_scatter)
	(vect_create_data_ref_ptr, bump_vector_ptr)
	(vect_permute_store_chain, vect_setup_realignment)
	(vect_transform_grouped_load, vect_record_grouped_load_vectors)
	(vect_create_addr_base_for_vector_ref, vectorizable_live_operation)
	(vectorizable_reduction, vectorizable_induction)
	(get_initial_def_for_reduction, is_simple_and_all_uses_invariant)
	(vect_get_place_in_interleaving_chain): Take stmt_vec_infos rather
	than gimple stmts as arguments.
	* tree-vect-data-refs.c (vect_get_smallest_scalar_type)
	(vect_preserves_scalar_order_p, vect_slp_analyze_node_dependences)
	(can_group_stmts_p, vect_check_gather_scatter)
	(vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
	(bump_vector_ptr, vect_permute_store_chain, vect_setup_realignment)
	(vect_permute_load_chain, vect_shift_permute_load_chain)
	(vect_transform_grouped_load)
	(vect_record_grouped_load_vectors): Likewise.
	* tree-vect-loop.c (vect_fixup_reduc_chain)
	(get_initial_def_for_reduction, vect_create_epilog_for_reduction)
	(vectorize_fold_left_reduction, is_nonwrapping_integer_induction)
	(vectorizable_reduction, vectorizable_induction)
	(vectorizable_live_operation, vect_loop_kill_debug_uses): Likewise.
	* tree-vect-patterns.c (type_conversion_p, adjust_bool_stmts)
	(vect_get_load_store_mask): Likewise.
	* tree-vect-slp.c (vect_get_place_in_interleaving_chain)
	(vect_analyze_slp_instance, vect_mask_constant_operand_p): Likewise.
	* tree-vect-stmts.c (vect_mark_relevant)
	(is_simple_and_all_uses_invariant)
	(exist_non_indexing_operands_for_use_p, process_use)
	(vect_init_vector_1, vect_init_vector, vect_get_vec_def_for_operand_1)
	(vect_get_vec_def_for_operand, vect_get_vec_defs)
	(vect_finish_stmt_generation_1, vect_finish_replace_stmt)
	(vect_finish_stmt_generation, vect_truncate_gather_scatter_offset)
	(compare_step_with_zero, vect_get_store_rhs, get_group_load_store_type)
	(get_negative_load_store_type, get_load_store_type)
	(vect_check_load_store_mask, vect_check_store_rhs)
	(vect_build_gather_load_calls, vect_get_strided_load_store_ops)
	(vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
	(vect_create_vectorized_demotion_stmts, vectorizable_conversion)
	(vectorizable_assignment, vectorizable_shift, vectorizable_operation)
	(get_group_alias_ptr_type, vectorizable_store, hoist_defs_of_uses)
	(vectorizable_load, vectorizable_condition, vectorizable_comparison)
	(vect_analyze_stmt, vect_transform_stmt, vect_remove_stores)
	(supportable_widening_operation): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:35.384731983 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:50.008602115 +0100
@@ -627,13 +627,6 @@ loop_vec_info_for_loop (struct loop *loo
   return (loop_vec_info) loop->aux;
 }
 
-static inline bool
-nested_in_vect_loop_p (struct loop *loop, gimple *stmt)
-{
-  return (loop->inner
-          && (loop->inner == (gimple_bb (stmt))->loop_father));
-}
-
 typedef struct _bb_vec_info : public vec_info
 {
   _bb_vec_info (gimple_stmt_iterator, gimple_stmt_iterator, vec_info_shared *);
@@ -1119,6 +1112,13 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
     }
 }
 
+static inline bool
+nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
+{
+  return (loop->inner
+	  && (loop->inner == (gimple_bb (stmt_info->stmt))->loop_father));
+}
+
 /* Return the earlier statement between STMT1_INFO and STMT2_INFO.  */
 
 static inline stmt_vec_info
@@ -1493,8 +1493,8 @@ extern bool vect_is_simple_use (tree, ve
 extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
 				tree *, stmt_vec_info * = NULL,
 				gimple ** = NULL);
-extern bool supportable_widening_operation (enum tree_code, gimple *, tree,
-					    tree, enum tree_code *,
+extern bool supportable_widening_operation (enum tree_code, stmt_vec_info,
+					    tree, tree, enum tree_code *,
 					    enum tree_code *, int *,
 					    vec<tree> *);
 extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
@@ -1505,26 +1505,26 @@ extern void free_stmt_vec_info (gimple *
 extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
 				  enum vect_cost_for_stmt, stmt_vec_info,
 				  int, enum vect_cost_model_location);
-extern stmt_vec_info vect_finish_replace_stmt (gimple *, gimple *);
-extern stmt_vec_info vect_finish_stmt_generation (gimple *, gimple *,
+extern stmt_vec_info vect_finish_replace_stmt (stmt_vec_info, gimple *);
+extern stmt_vec_info vect_finish_stmt_generation (stmt_vec_info, gimple *,
 						  gimple_stmt_iterator *);
 extern bool vect_mark_stmts_to_be_vectorized (loop_vec_info);
-extern tree vect_get_store_rhs (gimple *);
-extern tree vect_get_vec_def_for_operand_1 (gimple *, enum vect_def_type);
-extern tree vect_get_vec_def_for_operand (tree, gimple *, tree = NULL);
-extern void vect_get_vec_defs (tree, tree, gimple *, vec<tree> *,
+extern tree vect_get_store_rhs (stmt_vec_info);
+extern tree vect_get_vec_def_for_operand_1 (stmt_vec_info, enum vect_def_type);
+extern tree vect_get_vec_def_for_operand (tree, stmt_vec_info, tree = NULL);
+extern void vect_get_vec_defs (tree, tree, stmt_vec_info, vec<tree> *,
 			       vec<tree> *, slp_tree);
 extern void vect_get_vec_defs_for_stmt_copy (enum vect_def_type *,
 					     vec<tree> *, vec<tree> *);
-extern tree vect_init_vector (gimple *, tree, tree,
+extern tree vect_init_vector (stmt_vec_info, tree, tree,
                               gimple_stmt_iterator *);
 extern tree vect_get_vec_def_for_stmt_copy (enum vect_def_type, tree);
-extern bool vect_transform_stmt (gimple *, gimple_stmt_iterator *,
+extern bool vect_transform_stmt (stmt_vec_info, gimple_stmt_iterator *,
                                  bool *, slp_tree, slp_instance);
-extern void vect_remove_stores (gimple *);
-extern bool vect_analyze_stmt (gimple *, bool *, slp_tree, slp_instance,
+extern void vect_remove_stores (stmt_vec_info);
+extern bool vect_analyze_stmt (stmt_vec_info, bool *, slp_tree, slp_instance,
 			       stmt_vector_for_cost *);
-extern bool vectorizable_condition (gimple *, gimple_stmt_iterator *,
+extern bool vectorizable_condition (stmt_vec_info, gimple_stmt_iterator *,
 				    stmt_vec_info *, tree, int, slp_tree,
 				    stmt_vector_for_cost *);
 extern void vect_get_load_cost (stmt_vec_info, int, bool,
@@ -1546,7 +1546,7 @@ extern tree vect_get_mask_type_for_stmt
 extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
 extern enum dr_alignment_support vect_supportable_dr_alignment
                                            (struct data_reference *, bool);
-extern tree vect_get_smallest_scalar_type (gimple *, HOST_WIDE_INT *,
+extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
                                            HOST_WIDE_INT *);
 extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
 extern bool vect_slp_analyze_instance_dependence (slp_instance);
@@ -1558,36 +1558,36 @@ extern bool vect_analyze_data_ref_access
 extern bool vect_prune_runtime_alias_test_list (loop_vec_info);
 extern bool vect_gather_scatter_fn_p (bool, bool, tree, tree, unsigned int,
 				      signop, int, internal_fn *, tree *);
-extern bool vect_check_gather_scatter (gimple *, loop_vec_info,
+extern bool vect_check_gather_scatter (stmt_vec_info, loop_vec_info,
 				       gather_scatter_info *);
 extern bool vect_find_stmt_data_reference (loop_p, gimple *,
 					   vec<data_reference_p> *);
 extern bool vect_analyze_data_refs (vec_info *, poly_uint64 *);
 extern void vect_record_base_alignments (vec_info *);
-extern tree vect_create_data_ref_ptr (gimple *, tree, struct loop *, tree,
+extern tree vect_create_data_ref_ptr (stmt_vec_info, tree, struct loop *, tree,
 				      tree *, gimple_stmt_iterator *,
 				      gimple **, bool, bool *,
 				      tree = NULL_TREE, tree = NULL_TREE);
-extern tree bump_vector_ptr (tree, gimple *, gimple_stmt_iterator *, gimple *,
-			     tree);
+extern tree bump_vector_ptr (tree, gimple *, gimple_stmt_iterator *,
+			     stmt_vec_info, tree);
 extern void vect_copy_ref_info (tree, tree);
 extern tree vect_create_destination_var (tree, tree);
 extern bool vect_grouped_store_supported (tree, unsigned HOST_WIDE_INT);
 extern bool vect_store_lanes_supported (tree, unsigned HOST_WIDE_INT, bool);
 extern bool vect_grouped_load_supported (tree, bool, unsigned HOST_WIDE_INT);
 extern bool vect_load_lanes_supported (tree, unsigned HOST_WIDE_INT, bool);
-extern void vect_permute_store_chain (vec<tree> ,unsigned int, gimple *,
+extern void vect_permute_store_chain (vec<tree> ,unsigned int, stmt_vec_info,
                                     gimple_stmt_iterator *, vec<tree> *);
-extern tree vect_setup_realignment (gimple *, gimple_stmt_iterator *, tree *,
-                                    enum dr_alignment_support, tree,
+extern tree vect_setup_realignment (stmt_vec_info, gimple_stmt_iterator *,
+				    tree *, enum dr_alignment_support, tree,
                                     struct loop **);
-extern void vect_transform_grouped_load (gimple *, vec<tree> , int,
+extern void vect_transform_grouped_load (stmt_vec_info, vec<tree> , int,
                                          gimple_stmt_iterator *);
-extern void vect_record_grouped_load_vectors (gimple *, vec<tree> );
+extern void vect_record_grouped_load_vectors (stmt_vec_info, vec<tree>);
 extern tree vect_get_new_vect_var (tree, enum vect_var_kind, const char *);
 extern tree vect_get_new_ssa_name (tree, enum vect_var_kind,
 				   const char * = NULL);
-extern tree vect_create_addr_base_for_vector_ref (gimple *, gimple_seq *,
+extern tree vect_create_addr_base_for_vector_ref (stmt_vec_info, gimple_seq *,
 						  tree, tree = NULL_TREE);
 
 /* In tree-vect-loop.c.  */
@@ -1613,16 +1613,16 @@ extern tree vect_get_loop_mask (gimple_s
 /* Drive for loop transformation stage.  */
 extern struct loop *vect_transform_loop (loop_vec_info);
 extern loop_vec_info vect_analyze_loop_form (struct loop *, vec_info_shared *);
-extern bool vectorizable_live_operation (gimple *, gimple_stmt_iterator *,
+extern bool vectorizable_live_operation (stmt_vec_info, gimple_stmt_iterator *,
 					 slp_tree, int, stmt_vec_info *,
 					 stmt_vector_for_cost *);
-extern bool vectorizable_reduction (gimple *, gimple_stmt_iterator *,
+extern bool vectorizable_reduction (stmt_vec_info, gimple_stmt_iterator *,
 				    stmt_vec_info *, slp_tree, slp_instance,
 				    stmt_vector_for_cost *);
-extern bool vectorizable_induction (gimple *, gimple_stmt_iterator *,
+extern bool vectorizable_induction (stmt_vec_info, gimple_stmt_iterator *,
 				    stmt_vec_info *, slp_tree,
 				    stmt_vector_for_cost *);
-extern tree get_initial_def_for_reduction (gimple *, tree, tree *);
+extern tree get_initial_def_for_reduction (stmt_vec_info, tree, tree *);
 extern bool vect_worthwhile_without_simd_p (vec_info *, tree_code);
 extern int vect_get_known_peeling_cost (loop_vec_info, int, int *,
 					stmt_vector_for_cost *,
@@ -1643,13 +1643,13 @@ extern void vect_detect_hybrid_slp (loop
 extern void vect_get_slp_defs (vec<tree> , slp_tree, vec<vec<tree> > *);
 extern bool vect_slp_bb (basic_block);
 extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
-extern bool is_simple_and_all_uses_invariant (gimple *, loop_vec_info);
+extern bool is_simple_and_all_uses_invariant (stmt_vec_info, loop_vec_info);
 extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
 					    unsigned int * = NULL,
 					    tree * = NULL, tree * = NULL);
 extern void duplicate_and_interleave (gimple_seq *, tree, vec<tree>,
 				      unsigned int, vec<tree> &);
-extern int vect_get_place_in_interleaving_chain (gimple *, gimple *);
+extern int vect_get_place_in_interleaving_chain (stmt_vec_info, stmt_vec_info);
 
 /* In tree-vect-patterns.c.  */
 /* Pattern recognition functions.
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:46.108636749 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:50.000602186 +0100
@@ -99,7 +99,7 @@ vect_lanes_optab_supported_p (const char
 }
 
 
-/* Return the smallest scalar part of STMT.
+/* Return the smallest scalar part of STMT_INFO.
    This is used to determine the vectype of the stmt.  We generally set the
    vectype according to the type of the result (lhs).  For stmts whose
    result-type is different than the type of the arguments (e.g., demotion,
@@ -117,10 +117,11 @@ vect_lanes_optab_supported_p (const char
    types.  */
 
 tree
-vect_get_smallest_scalar_type (gimple *stmt, HOST_WIDE_INT *lhs_size_unit,
-                               HOST_WIDE_INT *rhs_size_unit)
+vect_get_smallest_scalar_type (stmt_vec_info stmt_info,
+			       HOST_WIDE_INT *lhs_size_unit,
+			       HOST_WIDE_INT *rhs_size_unit)
 {
-  tree scalar_type = gimple_expr_type (stmt);
+  tree scalar_type = gimple_expr_type (stmt_info->stmt);
   HOST_WIDE_INT lhs, rhs;
 
   /* During the analysis phase, this function is called on arbitrary
@@ -130,7 +131,7 @@ vect_get_smallest_scalar_type (gimple *s
 
   lhs = rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
 
-  gassign *assign = dyn_cast <gassign *> (stmt);
+  gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
   if (assign
       && (gimple_assign_cast_p (assign)
 	  || gimple_assign_rhs_code (assign) == DOT_PROD_EXPR
@@ -191,16 +192,14 @@ vect_check_nonzero_value (loop_vec_info
   LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
 }
 
-/* Return true if we know that the order of vectorized STMT_A and
-   vectorized STMT_B will be the same as the order of STMT_A and STMT_B.
-   At least one of the statements is a write.  */
+/* Return true if we know that the order of vectorized STMTINFO_A and
+   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
+   STMTINFO_B.  At least one of the statements is a write.  */
 
 static bool
-vect_preserves_scalar_order_p (gimple *stmt_a, gimple *stmt_b)
+vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
+			       stmt_vec_info stmtinfo_b)
 {
-  stmt_vec_info stmtinfo_a = vinfo_for_stmt (stmt_a);
-  stmt_vec_info stmtinfo_b = vinfo_for_stmt (stmt_b);
-
   /* Single statements are always kept in their original order.  */
   if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
       && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
@@ -666,7 +665,7 @@ vect_slp_analyze_data_ref_dependence (st
 static bool
 vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
 				   vec<stmt_vec_info> stores,
-				   gimple *last_store)
+				   stmt_vec_info last_store_info)
 {
   /* This walks over all stmts involved in the SLP load/store done
      in NODE verifying we can sink them up to the last stmt in the
@@ -712,7 +711,7 @@ vect_slp_analyze_node_dependences (slp_i
 	     been sunk to (and we verify if we can do that as well).  */
 	  if (gimple_visited_p (stmt))
 	    {
-	      if (stmt_info != last_store)
+	      if (stmt_info != last_store_info)
 		continue;
 	      unsigned i;
 	      stmt_vec_info store_info;
@@ -2843,20 +2842,20 @@ strip_conversion (tree op)
   return gimple_assign_rhs1 (stmt);
 }
 
-/* Return true if vectorizable_* routines can handle statements STMT1
-   and STMT2 being in a single group.  */
+/* Return true if vectorizable_* routines can handle statements STMT1_INFO
+   and STMT2_INFO being in a single group.  */
 
 static bool
-can_group_stmts_p (gimple *stmt1, gimple *stmt2)
+can_group_stmts_p (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
 {
-  if (gimple_assign_single_p (stmt1))
-    return gimple_assign_single_p (stmt2);
+  if (gimple_assign_single_p (stmt1_info->stmt))
+    return gimple_assign_single_p (stmt2_info->stmt);
 
-  gcall *call1 = dyn_cast <gcall *> (stmt1);
+  gcall *call1 = dyn_cast <gcall *> (stmt1_info->stmt);
   if (call1 && gimple_call_internal_p (call1))
     {
       /* Check for two masked loads or two masked stores.  */
-      gcall *call2 = dyn_cast <gcall *> (stmt2);
+      gcall *call2 = dyn_cast <gcall *> (stmt2_info->stmt);
       if (!call2 || !gimple_call_internal_p (call2))
 	return false;
       internal_fn ifn = gimple_call_internal_fn (call1);
@@ -3643,17 +3642,16 @@ vect_describe_gather_scatter_call (stmt_
   info->memory_type = TREE_TYPE (DR_REF (dr));
 }
 
-/* Return true if a non-affine read or write in STMT is suitable for a
+/* Return true if a non-affine read or write in STMT_INFO is suitable for a
    gather load or scatter store.  Describe the operation in *INFO if so.  */
 
 bool
-vect_check_gather_scatter (gimple *stmt, loop_vec_info loop_vinfo,
+vect_check_gather_scatter (stmt_vec_info stmt_info, loop_vec_info loop_vinfo,
 			   gather_scatter_info *info)
 {
   HOST_WIDE_INT scale = 1;
   poly_int64 pbitpos, pbitsize;
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree offtype = NULL_TREE;
   tree decl = NULL_TREE, base, off;
@@ -4473,7 +4471,7 @@ vect_duplicate_ssa_name_ptr_info (tree n
    that will be accessed for a data reference.
 
    Input:
-   STMT: The statement containing the data reference.
+   STMT_INFO: The statement containing the data reference.
    NEW_STMT_LIST: Must be initialized to NULL_TREE or a statement list.
    OFFSET: Optional. If supplied, it is be added to the initial address.
    LOOP:    Specify relative to which loop-nest should the address be computed.
@@ -4502,12 +4500,11 @@ vect_duplicate_ssa_name_ptr_info (tree n
    FORNOW: We are only handling array accesses with step 1.  */
 
 tree
-vect_create_addr_base_for_vector_ref (gimple *stmt,
+vect_create_addr_base_for_vector_ref (stmt_vec_info stmt_info,
 				      gimple_seq *new_stmt_list,
 				      tree offset,
 				      tree byte_offset)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   const char *base_name;
   tree addr_base;
@@ -4588,26 +4585,26 @@ vect_create_addr_base_for_vector_ref (gi
 /* Function vect_create_data_ref_ptr.
 
    Create a new pointer-to-AGGR_TYPE variable (ap), that points to the first
-   location accessed in the loop by STMT, along with the def-use update
+   location accessed in the loop by STMT_INFO, along with the def-use update
    chain to appropriately advance the pointer through the loop iterations.
    Also set aliasing information for the pointer.  This pointer is used by
    the callers to this function to create a memory reference expression for
    vector load/store access.
 
    Input:
-   1. STMT: a stmt that references memory. Expected to be of the form
+   1. STMT_INFO: a stmt that references memory. Expected to be of the form
          GIMPLE_ASSIGN <name, data-ref> or
 	 GIMPLE_ASSIGN <data-ref, name>.
    2. AGGR_TYPE: the type of the reference, which should be either a vector
         or an array.
    3. AT_LOOP: the loop where the vector memref is to be created.
    4. OFFSET (optional): an offset to be added to the initial address accessed
-        by the data-ref in STMT.
+	by the data-ref in STMT_INFO.
    5. BSI: location where the new stmts are to be placed if there is no loop
    6. ONLY_INIT: indicate if ap is to be updated in the loop, or remain
         pointing to the initial address.
    7. BYTE_OFFSET (optional, defaults to NULL): a byte offset to be added
-	to the initial address accessed by the data-ref in STMT.  This is
+	to the initial address accessed by the data-ref in STMT_INFO.  This is
 	similar to OFFSET, but OFFSET is counted in elements, while BYTE_OFFSET
 	in bytes.
    8. IV_STEP (optional, defaults to NULL): the amount that should be added
@@ -4643,14 +4640,13 @@ vect_create_addr_base_for_vector_ref (gi
    4. Return the pointer.  */
 
 tree
-vect_create_data_ref_ptr (gimple *stmt, tree aggr_type, struct loop *at_loop,
-			  tree offset, tree *initial_address,
-			  gimple_stmt_iterator *gsi, gimple **ptr_incr,
-			  bool only_init, bool *inv_p, tree byte_offset,
-			  tree iv_step)
+vect_create_data_ref_ptr (stmt_vec_info stmt_info, tree aggr_type,
+			  struct loop *at_loop, tree offset,
+			  tree *initial_address, gimple_stmt_iterator *gsi,
+			  gimple **ptr_incr, bool only_init, bool *inv_p,
+			  tree byte_offset, tree iv_step)
 {
   const char *base_name;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
   bool nested_in_vect_loop = false;
@@ -4905,7 +4901,7 @@ vect_create_data_ref_ptr (gimple *stmt,
 	      the loop.  The increment amount across iterations is expected
 	      to be vector_size.
    BSI - location where the new update stmt is to be placed.
-   STMT - the original scalar memory-access stmt that is being vectorized.
+   STMT_INFO - the original scalar memory-access stmt that is being vectorized.
    BUMP - optional. The offset by which to bump the pointer. If not given,
 	  the offset is assumed to be vector_size.
 
@@ -4915,9 +4911,8 @@ vect_create_data_ref_ptr (gimple *stmt,
 
 tree
 bump_vector_ptr (tree dataref_ptr, gimple *ptr_incr, gimple_stmt_iterator *gsi,
-		 gimple *stmt, tree bump)
+		 stmt_vec_info stmt_info, tree bump)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   tree update = TYPE_SIZE_UNIT (vectype);
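
To make the bump concrete: with 16-byte vectors, each copy of the
vectorized access advances the pointer by the vector size.  Illustrative
GIMPLE with invented SSA names, not dump output:

    vectp.1 = vectp.0 + 16;
    vect__2 = MEM[(int *)vectp.1];

BUMP overrides the 16 above when the caller needs a step other than the
vector size.
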
@@ -5217,11 +5212,10 @@ vect_store_lanes_supported (tree vectype
 void
 vect_permute_store_chain (vec<tree> dr_chain,
 			  unsigned int length,
-			  gimple *stmt,
+			  stmt_vec_info stmt_info,
 			  gimple_stmt_iterator *gsi,
 			  vec<tree> *result_chain)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vect1, vect2, high, low;
   gimple *perm_stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -5368,12 +5362,12 @@ vect_permute_store_chain (vec<tree> dr_c
    dr_explicit_realign_optimized.
 
    The code above sets up a new (vector) pointer, pointing to the first
-   location accessed by STMT, and a "floor-aligned" load using that pointer.
-   It also generates code to compute the "realignment-token" (if the relevant
-   target hook was defined), and creates a phi-node at the loop-header bb
-   whose arguments are the result of the prolog-load (created by this
-   function) and the result of a load that takes place in the loop (to be
-   created by the caller to this function).
+   location accessed by STMT_INFO, and a "floor-aligned" load using that
+   pointer.  It also generates code to compute the "realignment-token"
+   (if the relevant target hook was defined), and creates a phi-node at the
+   loop-header bb whose arguments are the result of the prolog-load (created
+   by this function) and the result of a load that takes place in the loop
+   (to be created by the caller to this function).
 
    For the case of dr_explicit_realign_optimized:
    The caller to this function uses the phi-result (msq) to create the
@@ -5392,8 +5386,8 @@ vect_permute_store_chain (vec<tree> dr_c
       result = realign_load (msq, lsq, realignment_token);
 
    Input:
-   STMT - (scalar) load stmt to be vectorized. This load accesses
-          a memory location that may be unaligned.
+   STMT_INFO - (scalar) load stmt to be vectorized. This load accesses
+	       a memory location that may be unaligned.
    BSI - place where new code is to be inserted.
    ALIGNMENT_SUPPORT_SCHEME - which of the two misalignment handling schemes
 			      is used.
@@ -5404,13 +5398,12 @@ vect_permute_store_chain (vec<tree> dr_c
    Return value - the result of the loop-header phi node.  */
 
 tree
-vect_setup_realignment (gimple *stmt, gimple_stmt_iterator *gsi,
+vect_setup_realignment (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
                         tree *realignment_token,
 			enum dr_alignment_support alignment_support_scheme,
 			tree init_addr,
 			struct loop **at_loop)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
@@ -5839,11 +5832,10 @@ vect_load_lanes_supported (tree vectype,
 static void
 vect_permute_load_chain (vec<tree> dr_chain,
 			 unsigned int length,
-			 gimple *stmt,
+			 stmt_vec_info stmt_info,
 			 gimple_stmt_iterator *gsi,
 			 vec<tree> *result_chain)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree data_ref, first_vect, second_vect;
   tree perm_mask_even, perm_mask_odd;
   tree perm3_mask_low, perm3_mask_high;
@@ -6043,11 +6035,10 @@ vect_permute_load_chain (vec<tree> dr_ch
 static bool
 vect_shift_permute_load_chain (vec<tree> dr_chain,
 			       unsigned int length,
-			       gimple *stmt,
+			       stmt_vec_info stmt_info,
 			       gimple_stmt_iterator *gsi,
 			       vec<tree> *result_chain)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vect[3], vect_shift[3], data_ref, first_vect, second_vect;
   tree perm2_mask1, perm2_mask2, perm3_mask;
   tree select_mask, shift1_mask, shift2_mask, shift3_mask, shift4_mask;
@@ -6311,10 +6302,9 @@ vect_shift_permute_load_chain (vec<tree>
 */
 
 void
-vect_transform_grouped_load (gimple *stmt, vec<tree> dr_chain, int size,
-			     gimple_stmt_iterator *gsi)
+vect_transform_grouped_load (stmt_vec_info stmt_info, vec<tree> dr_chain,
+			     int size, gimple_stmt_iterator *gsi)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   machine_mode mode;
   vec<tree> result_chain = vNULL;
 
@@ -6337,13 +6327,13 @@ vect_transform_grouped_load (gimple *stm
 }
 
 /* RESULT_CHAIN contains the output of a group of grouped loads that were
-   generated as part of the vectorization of STMT.  Assign the statement
+   generated as part of the vectorization of STMT_INFO.  Assign the statement
    for each vector to the associated scalar statement.  */
 
 void
-vect_record_grouped_load_vectors (gimple *stmt, vec<tree> result_chain)
+vect_record_grouped_load_vectors (stmt_vec_info stmt_info,
+				  vec<tree> result_chain)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
   unsigned int i, gap_count;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:46.112636713 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:50.004602150 +0100
@@ -648,12 +648,12 @@ vect_analyze_scalar_cycles (loop_vec_inf
     vect_analyze_scalar_cycles_1 (loop_vinfo, loop->inner);
 }
 
-/* Transfer group and reduction information from STMT to its pattern stmt.  */
+/* Transfer group and reduction information from STMT_INFO to its
+   pattern stmt.  */
 
 static void
-vect_fixup_reduc_chain (gimple *stmt)
+vect_fixup_reduc_chain (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   stmt_vec_info firstp = STMT_VINFO_RELATED_STMT (stmt_info);
   stmt_vec_info stmtp;
   gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (firstp)
@@ -3998,15 +3998,15 @@ vect_model_induction_cost (stmt_vec_info
 /* Function get_initial_def_for_reduction
 
    Input:
-   STMT - a stmt that performs a reduction operation in the loop.
+   STMT_VINFO - a stmt that performs a reduction operation in the loop.
    INIT_VAL - the initial value of the reduction variable
 
    Output:
    ADJUSTMENT_DEF - a tree that holds a value to be added to the final result
         of the reduction (used for adjusting the epilog - see below).
-   Return a vector variable, initialized according to the operation that STMT
-        performs. This vector will be used as the initial value of the
-        vector of partial results.
+   Return a vector variable, initialized according to the operation that
+	STMT_VINFO performs. This vector will be used as the initial value
+	of the vector of partial results.
 
    Option1 (adjust in epilog): Initialize the vector as follows:
      add/bit or/xor:    [0,0,...,0,0]
@@ -4027,7 +4027,7 @@ vect_model_induction_cost (stmt_vec_info
    for (i=0;i<n;i++)
      s = s + a[i];
 
-   STMT is 's = s + a[i]', and the reduction variable is 's'.
+   STMT_VINFO is 's = s + a[i]', and the reduction variable is 's'.
    For a vector of 4 units, we want to return either [0,0,0,init_val],
    or [0,0,0,0] and let the caller know that it needs to adjust
    the result at the end by 'init_val'.
@@ -4039,10 +4039,9 @@ vect_model_induction_cost (stmt_vec_info
    A cost model should help decide between these two schemes.  */
 
 tree
-get_initial_def_for_reduction (gimple *stmt, tree init_val,
+get_initial_def_for_reduction (stmt_vec_info stmt_vinfo, tree init_val,
                                tree *adjustment_def)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree scalar_type = TREE_TYPE (init_val);
@@ -4321,7 +4320,7 @@ get_initial_defs_for_reduction (slp_tree
   
    VECT_DEFS is list of vector of partial results, i.e., the lhs's of vector 
      reduction statements. 
-   STMT is the scalar reduction stmt that is being vectorized.
+   STMT_INFO is the scalar reduction stmt that is being vectorized.
    NCOPIES is > 1 in case the vectorization factor (VF) is bigger than the
      number of elements that we can fit in a vectype (nunits).  In this case
      we have to generate more than one vector stmt - i.e - we need to "unroll"
@@ -4334,7 +4333,7 @@ get_initial_defs_for_reduction (slp_tree
      statement that is defined by REDUCTION_PHI.
    DOUBLE_REDUC is TRUE if double reduction phi nodes should be handled.
    SLP_NODE is an SLP node containing a group of reduction statements. The 
-     first one in this group is STMT.
+     first one in this group is STMT_INFO.
    INDUC_VAL is for INTEGER_INDUC_COND_REDUCTION the value to use for the case
      when the COND_EXPR is never true in the loop.  For MAX_EXPR, it needs to
      be smaller than any value of the IV in the loop, for MIN_EXPR larger than
@@ -4359,8 +4358,8 @@ get_initial_defs_for_reduction (slp_tree
 
         loop:
           vec_def = phi <null, null>            # REDUCTION_PHI
-          VECT_DEF = vector_stmt                # vectorized form of STMT
-          s_loop = scalar_stmt                  # (scalar) STMT
+          VECT_DEF = vector_stmt                # vectorized form of STMT_INFO
+          s_loop = scalar_stmt                  # (scalar) STMT_INFO
         loop_exit:
           s_out0 = phi <s_loop>                 # (scalar) EXIT_PHI
           use <s_out0>
@@ -4370,8 +4369,8 @@ get_initial_defs_for_reduction (slp_tree
 
         loop:
           vec_def = phi <vec_init, VECT_DEF>    # REDUCTION_PHI
-          VECT_DEF = vector_stmt                # vectorized form of STMT
-          s_loop = scalar_stmt                  # (scalar) STMT
+          VECT_DEF = vector_stmt                # vectorized form of STMT_INFO
+          s_loop = scalar_stmt                  # (scalar) STMT_INFO
         loop_exit:
           s_out0 = phi <s_loop>                 # (scalar) EXIT_PHI
           v_out1 = phi <VECT_DEF>               # NEW_EXIT_PHI
@@ -4383,7 +4382,8 @@ get_initial_defs_for_reduction (slp_tree
 */
 
 static void
-vect_create_epilog_for_reduction (vec<tree> vect_defs, gimple *stmt,
+vect_create_epilog_for_reduction (vec<tree> vect_defs,
+				  stmt_vec_info stmt_info,
 				  gimple *reduc_def_stmt,
 				  int ncopies, internal_fn reduc_fn,
 				  vec<stmt_vec_info> reduction_phis,
@@ -4393,7 +4393,6 @@ vect_create_epilog_for_reduction (vec<tr
 				  tree induc_val, enum tree_code induc_code,
 				  tree neutral_op)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   stmt_vec_info prev_phi_info;
   tree vectype;
   machine_mode mode;
@@ -5816,9 +5815,9 @@ vect_expand_fold_left (gimple_stmt_itera
   return lhs;
 }
 
-/* Perform an in-order reduction (FOLD_LEFT_REDUCTION).  STMT is the
+/* Perform an in-order reduction (FOLD_LEFT_REDUCTION).  STMT_INFO is the
    statement that sets the live-out value.  REDUC_DEF_STMT is the phi
-   statement.  CODE is the operation performed by STMT and OPS are
+   statement.  CODE is the operation performed by STMT_INFO and OPS are
    its scalar operands.  REDUC_INDEX is the index of the operand in
    OPS that is set by REDUC_DEF_STMT.  REDUC_FN is the function that
    implements in-order reduction, or IFN_LAST if we should open-code it.
@@ -5826,14 +5825,14 @@ vect_expand_fold_left (gimple_stmt_itera
    that should be used to control the operation in a fully-masked loop.  */
 
 static bool
-vectorize_fold_left_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorize_fold_left_reduction (stmt_vec_info stmt_info,
+			       gimple_stmt_iterator *gsi,
 			       stmt_vec_info *vec_stmt, slp_tree slp_node,
 			       gimple *reduc_def_stmt,
 			       tree_code code, internal_fn reduc_fn,
 			       tree ops[3], tree vectype_in,
 			       int reduc_index, vec_loop_masks *masks)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree vectype_out = STMT_VINFO_VECTYPE (stmt_info);
@@ -5962,16 +5961,16 @@ vectorize_fold_left_reduction (gimple *s
 
 /* Function is_nonwrapping_integer_induction.
 
-   Check if STMT (which is part of loop LOOP) both increments and
+   Check if STMT_VINFO (which is part of loop LOOP) both increments and
    does not cause overflow.  */
 
 static bool
-is_nonwrapping_integer_induction (gimple *stmt, struct loop *loop)
+is_nonwrapping_integer_induction (stmt_vec_info stmt_vinfo, struct loop *loop)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  gphi *phi = as_a <gphi *> (stmt_vinfo->stmt);
   tree base = STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED (stmt_vinfo);
   tree step = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_vinfo);
-  tree lhs_type = TREE_TYPE (gimple_phi_result (stmt));
+  tree lhs_type = TREE_TYPE (gimple_phi_result (phi));
   widest_int ni, max_loop_value, lhs_max;
   wi::overflow_type overflow = wi::OVF_NONE;
 
@@ -6004,17 +6003,18 @@ is_nonwrapping_integer_induction (gimple
 
 /* Function vectorizable_reduction.
 
-   Check if STMT performs a reduction operation that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
+   Check if STMT_INFO performs a reduction operation that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
    stmt to replace it, put it in VEC_STMT, and insert it at GSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.
+   Return true if STMT_INFO is vectorizable in this way.
 
    This function also handles reduction idioms (patterns) that have been
-   recognized in advance during vect_pattern_recog.  In this case, STMT may be
-   of this form:
+   recognized in advance during vect_pattern_recog.  In this case, STMT_INFO
+   may be of this form:
      X = pattern_expr (arg0, arg1, ..., X)
-   and it's STMT_VINFO_RELATED_STMT points to the last stmt in the original
-   sequence that had been detected and replaced by the pattern-stmt (STMT).
+   and its STMT_VINFO_RELATED_STMT points to the last stmt in the original
+   sequence that had been detected and replaced by the pattern-stmt
+   (STMT_INFO).
 
    This function also handles reduction of condition expressions, for example:
      for (int i = 0; i < N; i++)
@@ -6026,9 +6026,9 @@ is_nonwrapping_integer_induction (gimple
    index into the vector of results.
 
    In some cases of reduction patterns, the type of the reduction variable X is
-   different than the type of the other arguments of STMT.
-   In such cases, the vectype that is used when transforming STMT into a vector
-   stmt is different than the vectype that is used to determine the
+   different than the type of the other arguments of STMT_INFO.
+   In such cases, the vectype that is used when transforming STMT_INFO into
+   a vector stmt is different than the vectype that is used to determine the
    vectorization factor, because it consists of a different number of elements
    than the actual number of elements that are being operated upon in parallel.
 
@@ -6052,14 +6052,13 @@ is_nonwrapping_integer_induction (gimple
    does *NOT* necessarily hold for reduction patterns.  */
 
 bool
-vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_reduction (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			slp_instance slp_node_instance,
 			stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
   tree scalar_dest;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vectype_out = STMT_VINFO_VECTYPE (stmt_info);
   tree vectype_in = NULL_TREE;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
@@ -6247,7 +6246,7 @@ vectorizable_reduction (gimple *stmt, gi
         inside the loop body. The last operand is the reduction variable,
         which is defined by the loop-header-phi.  */
 
-  gcc_assert (is_gimple_assign (stmt));
+  gassign *stmt = as_a <gassign *> (stmt_info->stmt);
 
   /* Flatten RHS.  */
   switch (get_gimple_rhs_class (gimple_assign_rhs_code (stmt)))
@@ -7240,18 +7239,17 @@ vect_worthwhile_without_simd_p (vec_info
 
 /* Function vectorizable_induction
 
-   Check if PHI performs an induction computation that can be vectorized.
+   Check if STMT_INFO performs an induction computation that can be vectorized.
    If VEC_STMT is also passed, vectorize the induction PHI: create a vectorized
    phi to replace it, put it in VEC_STMT, and add it to the same basic block.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 bool
-vectorizable_induction (gimple *phi,
+vectorizable_induction (stmt_vec_info stmt_info,
 			gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
 			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (phi);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   unsigned ncopies;
@@ -7276,9 +7274,9 @@ vectorizable_induction (gimple *phi,
   edge latch_e;
   tree loop_arg;
   gimple_stmt_iterator si;
-  basic_block bb = gimple_bb (phi);
 
-  if (gimple_code (phi) != GIMPLE_PHI)
+  gphi *phi = dyn_cast <gphi *> (stmt_info->stmt);
+  if (!phi)
     return false;
 
   if (!STMT_VINFO_RELEVANT_P (stmt_info))
@@ -7426,6 +7424,7 @@ vectorizable_induction (gimple *phi,
     }
 
   /* Find the first insertion point in the BB.  */
+  basic_block bb = gimple_bb (phi);
   si = gsi_after_labels (bb);
 
   /* For SLP induction we have to generate several IVs as for example
@@ -7791,17 +7790,16 @@ vectorizable_induction (gimple *phi,
 
 /* Function vectorizable_live_operation.
 
-   STMT computes a value that is used outside the loop.  Check if
+   STMT_INFO computes a value that is used outside the loop.  Check if
    it can be supported.  */
 
 bool
-vectorizable_live_operation (gimple *stmt,
+vectorizable_live_operation (stmt_vec_info stmt_info,
 			     gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
 			     slp_tree slp_node, int slp_index,
 			     stmt_vec_info *vec_stmt,
 			     stmt_vector_for_cost *)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   imm_use_iterator imm_iter;
@@ -7908,8 +7906,9 @@ vectorizable_live_operation (gimple *stm
     }
 
   /* If stmt has a related stmt, then use that for getting the lhs.  */
-  if (is_pattern_stmt_p (stmt_info))
-    stmt = STMT_VINFO_RELATED_STMT (stmt_info);
+  gimple *stmt = (is_pattern_stmt_p (stmt_info)
+		  ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
+		  : stmt_info->stmt);
 
   lhs = (is_a <gphi *> (stmt)) ? gimple_phi_result (stmt)
 	: gimple_get_lhs (stmt);
@@ -8010,17 +8009,17 @@ vectorizable_live_operation (gimple *stm
   return true;
 }
 
-/* Kill any debug uses outside LOOP of SSA names defined in STMT.  */
+/* Kill any debug uses outside LOOP of SSA names defined in STMT_INFO.  */
 
 static void
-vect_loop_kill_debug_uses (struct loop *loop, gimple *stmt)
+vect_loop_kill_debug_uses (struct loop *loop, stmt_vec_info stmt_info)
 {
   ssa_op_iter op_iter;
   imm_use_iterator imm_iter;
   def_operand_p def_p;
   gimple *ustmt;
 
-  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt, op_iter, SSA_OP_DEF)
+  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt_info->stmt, op_iter, SSA_OP_DEF)
     {
       FOR_EACH_IMM_USE_STMT (ustmt, imm_iter, DEF_FROM_PTR (def_p))
 	{
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:23:35.380732018 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:23:50.004602150 +0100
@@ -236,22 +236,20 @@ vect_get_internal_def (vec_info *vinfo,
   return NULL;
 }
 
-/* Check whether NAME, an ssa-name used in USE_STMT,
+/* Check whether NAME, an ssa-name used in STMT_VINFO,
    is a result of a type promotion, such that:
      DEF_STMT: NAME = NOP (name0)
    If CHECK_SIGN is TRUE, check that either both types are signed or both are
    unsigned.  */
 
 static bool
-type_conversion_p (tree name, gimple *use_stmt, bool check_sign,
+type_conversion_p (tree name, stmt_vec_info stmt_vinfo, bool check_sign,
 		   tree *orig_type, gimple **def_stmt, bool *promotion)
 {
-  stmt_vec_info stmt_vinfo;
   tree type = TREE_TYPE (name);
   tree oprnd0;
   enum vect_def_type dt;
 
-  stmt_vinfo = vinfo_for_stmt (use_stmt);
   stmt_vec_info def_stmt_info;
   if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, &def_stmt_info,
 			   def_stmt))
@@ -3498,15 +3496,13 @@ sort_after_uid (const void *p1, const vo
 }
 
 /* Create pattern stmts for all stmts participating in the bool pattern
-   specified by BOOL_STMT_SET and its root STMT with the desired type
+   specified by BOOL_STMT_SET and its root STMT_INFO with the desired type
    OUT_TYPE.  Return the def of the pattern root.  */
 
 static tree
 adjust_bool_stmts (hash_set <gimple *> &bool_stmt_set,
-		   tree out_type, gimple *stmt)
+		   tree out_type, stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-
   /* Gather original stmts in the bool pattern in their order of appearance
      in the IL.  */
   auto_vec<gimple *> bool_stmts (bool_stmt_set.elements ());
@@ -4126,19 +4122,19 @@ vect_recog_mask_conversion_pattern (stmt
   return pattern_stmt;
 }
 
-/* STMT is a load or store.  If the load or store is conditional, return
+/* STMT_INFO is a load or store.  If the load or store is conditional, return
    the boolean condition under which it occurs, otherwise return null.  */
 
 static tree
-vect_get_load_store_mask (gimple *stmt)
+vect_get_load_store_mask (stmt_vec_info stmt_info)
 {
-  if (gassign *def_assign = dyn_cast <gassign *> (stmt))
+  if (gassign *def_assign = dyn_cast <gassign *> (stmt_info->stmt))
     {
       gcc_assert (gimple_assign_single_p (def_assign));
       return NULL_TREE;
     }
 
-  if (gcall *def_call = dyn_cast <gcall *> (stmt))
+  if (gcall *def_call = dyn_cast <gcall *> (stmt_info->stmt))
     {
       internal_fn ifn = gimple_call_internal_fn (def_call);
       int mask_index = internal_fn_mask_index (ifn);
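
The conditional case handled here is an internal-function call.
Illustrative GIMPLE with invented SSA names, not dump output:

    _5 = .MASK_LOAD (addr_3, 32B, mask_7);

internal_fn_mask_index picks out the mask operand (mask_7 above); a plain
single-assignment load has no mask, so the function returns NULL_TREE.
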
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:46.112636713 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:50.004602150 +0100
@@ -195,14 +195,14 @@ vect_free_oprnd_info (vec<slp_oprnd_info
 }
 
 
-/* Find the place of the data-ref in STMT in the interleaving chain that starts
-   from FIRST_STMT.  Return -1 if the data-ref is not a part of the chain.  */
+/* Find the place of the data-ref in STMT_INFO in the interleaving chain
+   that starts from FIRST_STMT_INFO.  Return -1 if the data-ref is not a part
+   of the chain.  */
 
 int
-vect_get_place_in_interleaving_chain (gimple *stmt, gimple *first_stmt)
+vect_get_place_in_interleaving_chain (stmt_vec_info stmt_info,
+				      stmt_vec_info first_stmt_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
   stmt_vec_info next_stmt_info = first_stmt_info;
   int result = 0;
 
@@ -1918,9 +1918,8 @@ calculate_unrolling_factor (poly_uint64
 
 static bool
 vect_analyze_slp_instance (vec_info *vinfo,
-			   gimple *stmt, unsigned max_tree_size)
+			   stmt_vec_info stmt_info, unsigned max_tree_size)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   slp_instance new_instance;
   slp_tree node;
   unsigned int group_size;
@@ -3118,13 +3117,12 @@ vect_slp_bb (basic_block bb)
 
 
 /* Return 1 if vector type of boolean constant which is OPNUM
-   operand in statement STMT is a boolean vector.  */
+   operand in statement STMT_VINFO is a boolean vector.  */
 
 static bool
-vect_mask_constant_operand_p (gimple *stmt, int opnum)
+vect_mask_constant_operand_p (stmt_vec_info stmt_vinfo, int opnum)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
-  enum tree_code code = gimple_expr_code (stmt);
+  enum tree_code code = gimple_expr_code (stmt_vinfo->stmt);
   tree op, vectype;
   enum vect_def_type dt;
 
@@ -3132,6 +3130,7 @@ vect_mask_constant_operand_p (gimple *st
      on the other comparison operand.  */
   if (TREE_CODE_CLASS (code) == tcc_comparison)
     {
+      gassign *stmt = as_a <gassign *> (stmt_vinfo->stmt);
       if (opnum)
 	op = gimple_assign_rhs1 (stmt);
       else
@@ -3145,6 +3144,7 @@ vect_mask_constant_operand_p (gimple *st
 
   if (code == COND_EXPR)
     {
+      gassign *stmt = as_a <gassign *> (stmt_vinfo->stmt);
       tree cond = gimple_assign_rhs1 (stmt);
 
       if (TREE_CODE (cond) == SSA_NAME)
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:46.116636678 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:50.008602115 +0100
@@ -192,13 +192,12 @@ vect_clobber_variable (stmt_vec_info stm
 
 /* Function vect_mark_relevant.
 
-   Mark STMT as "relevant for vectorization" and add it to WORKLIST.  */
+   Mark STMT_INFO as "relevant for vectorization" and add it to WORKLIST.  */
 
 static void
-vect_mark_relevant (vec<stmt_vec_info> *worklist, gimple *stmt,
+vect_mark_relevant (vec<stmt_vec_info> *worklist, stmt_vec_info stmt_info,
 		    enum vect_relevant relevant, bool live_p)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   enum vect_relevant save_relevant = STMT_VINFO_RELEVANT (stmt_info);
   bool save_live_p = STMT_VINFO_LIVE_P (stmt_info);
 
@@ -229,7 +228,6 @@ vect_mark_relevant (vec<stmt_vec_info> *
       gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == old_stmt_info);
       save_relevant = STMT_VINFO_RELEVANT (stmt_info);
       save_live_p = STMT_VINFO_LIVE_P (stmt_info);
-      stmt = stmt_info->stmt;
     }
 
   STMT_VINFO_LIVE_P (stmt_info) |= live_p;
@@ -251,15 +249,17 @@ vect_mark_relevant (vec<stmt_vec_info> *
 
 /* Function is_simple_and_all_uses_invariant
 
-   Return true if STMT is simple and all uses of it are invariant.  */
+   Return true if STMT_INFO is simple and all uses of it are invariant.  */
 
 bool
-is_simple_and_all_uses_invariant (gimple *stmt, loop_vec_info loop_vinfo)
+is_simple_and_all_uses_invariant (stmt_vec_info stmt_info,
+				  loop_vec_info loop_vinfo)
 {
   tree op;
   ssa_op_iter iter;
 
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   FOR_EACH_SSA_TREE_OPERAND (op, stmt, iter, SSA_OP_USE)
@@ -361,14 +361,13 @@ vect_stmt_relevant_p (stmt_vec_info stmt
 
 /* Function exist_non_indexing_operands_for_use_p
 
-   USE is one of the uses attached to STMT.  Check if USE is
-   used in STMT for anything other than indexing an array.  */
+   USE is one of the uses attached to STMT_INFO.  Check if USE is
+   used in STMT_INFO for anything other than indexing an array.  */
 
 static bool
-exist_non_indexing_operands_for_use_p (tree use, gimple *stmt)
+exist_non_indexing_operands_for_use_p (tree use, stmt_vec_info stmt_info)
 {
   tree operand;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
 
   /* USE corresponds to some operand in STMT.  If there is no data
      reference in STMT, then any operand that corresponds to USE
@@ -428,7 +427,7 @@ exist_non_indexing_operands_for_use_p (t
    Function process_use.
 
    Inputs:
-   - a USE in STMT in a loop represented by LOOP_VINFO
+   - a USE in STMT_VINFO in a loop represented by LOOP_VINFO
    - RELEVANT - enum value to be set in the STMT_VINFO of the stmt
      that defined USE.  This is done by calling mark_relevant and passing it
      the WORKLIST (to add DEF_STMT to the WORKLIST in case it is relevant).
@@ -438,25 +437,24 @@ exist_non_indexing_operands_for_use_p (t
    Outputs:
    Generally, LIVE_P and RELEVANT are used to define the liveness and
    relevance info of the DEF_STMT of this USE:
-       STMT_VINFO_LIVE_P (DEF_STMT_info) <-- live_p
-       STMT_VINFO_RELEVANT (DEF_STMT_info) <-- relevant
+       STMT_VINFO_LIVE_P (DEF_stmt_vinfo) <-- live_p
+       STMT_VINFO_RELEVANT (DEF_stmt_vinfo) <-- relevant
    Exceptions:
    - case 1: If USE is used only for address computations (e.g. array indexing),
    which does not need to be directly vectorized, then the liveness/relevance
    of the respective DEF_STMT is left unchanged.
-   - case 2: If STMT is a reduction phi and DEF_STMT is a reduction stmt, we
-   skip DEF_STMT cause it had already been processed.
-   - case 3: If DEF_STMT and STMT are in different nests, then  "relevant" will
-   be modified accordingly.
+   - case 2: If STMT_VINFO is a reduction phi and DEF_STMT is a reduction stmt,
+   we skip DEF_STMT because it has already been processed.
+   - case 3: If DEF_STMT and STMT_VINFO are in different nests, then
+   "relevant" will be modified accordingly.
 
    Return true if everything is as expected. Return false otherwise.  */
 
 static bool
-process_use (gimple *stmt, tree use, loop_vec_info loop_vinfo,
+process_use (stmt_vec_info stmt_vinfo, tree use, loop_vec_info loop_vinfo,
 	     enum vect_relevant relevant, vec<stmt_vec_info> *worklist,
 	     bool force)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   stmt_vec_info dstmt_vinfo;
   basic_block bb, def_bb;
   enum vect_def_type dt;
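
A minimal example of case 1, assuming the usual summation loop:

    for (int i = 0; i < n; i++)
      sum += a[i];

The use of "i" inside "a[i]" is only an address computation, so relevance
does not propagate from the load to the definition of "i" through that
use.
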
@@ -1342,12 +1340,12 @@ vect_get_load_cost (stmt_vec_info stmt_i
 }
 
 /* Insert the new stmt NEW_STMT at *GSI or at the appropriate place in
-   the loop preheader for the vectorized stmt STMT.  */
+   the loop preheader for the vectorized stmt STMT_VINFO.  */
 
 static void
-vect_init_vector_1 (gimple *stmt, gimple *new_stmt, gimple_stmt_iterator *gsi)
+vect_init_vector_1 (stmt_vec_info stmt_vinfo, gimple *new_stmt,
+		    gimple_stmt_iterator *gsi)
 {
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   if (gsi)
     vect_finish_stmt_generation (stmt_vinfo, new_stmt, gsi);
   else
@@ -1396,12 +1394,12 @@ vect_init_vector_1 (gimple *stmt, gimple
    Place the initialization at BSI if it is not NULL.  Otherwise, place the
    initialization at the loop preheader.
    Return the DEF of INIT_STMT.
-   It will be used in the vectorization of STMT.  */
+   It will be used in the vectorization of STMT_INFO.  */
 
 tree
-vect_init_vector (gimple *stmt, tree val, tree type, gimple_stmt_iterator *gsi)
+vect_init_vector (stmt_vec_info stmt_info, tree val, tree type,
+		  gimple_stmt_iterator *gsi)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   gimple *init_stmt;
   tree new_temp;
 
@@ -1456,15 +1454,15 @@ vect_init_vector (gimple *stmt, tree val
 
 /* Function vect_get_vec_def_for_operand_1.
 
-   For a defining stmt DEF_STMT of a scalar stmt, return a vector def with type
-   DT that will be used in the vectorized stmt.  */
+   For a defining stmt DEF_STMT_INFO of a scalar stmt, return a vector def
+   with type DT that will be used in the vectorized stmt.  */
 
 tree
-vect_get_vec_def_for_operand_1 (gimple *def_stmt, enum vect_def_type dt)
+vect_get_vec_def_for_operand_1 (stmt_vec_info def_stmt_info,
+				enum vect_def_type dt)
 {
   tree vec_oprnd;
   stmt_vec_info vec_stmt_info;
-  stmt_vec_info def_stmt_info = NULL;
 
   switch (dt)
     {
@@ -1478,8 +1476,6 @@ vect_get_vec_def_for_operand_1 (gimple *
     case vect_internal_def:
       {
         /* Get the def from the vectorized stmt.  */
-        def_stmt_info = vinfo_for_stmt (def_stmt);
-
 	vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
 	/* Get vectorized pattern statement.  */
 	if (!vec_stmt_info
@@ -1501,10 +1497,9 @@ vect_get_vec_def_for_operand_1 (gimple *
     case vect_nested_cycle:
     case vect_induction_def:
       {
-	gcc_assert (gimple_code (def_stmt) == GIMPLE_PHI);
+	gcc_assert (gimple_code (def_stmt_info->stmt) == GIMPLE_PHI);
 
 	/* Get the def from the vectorized stmt.  */
-	def_stmt_info = vinfo_for_stmt (def_stmt);
 	vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
 	if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
 	  vec_oprnd = PHI_RESULT (phi);
@@ -1521,8 +1516,8 @@ vect_get_vec_def_for_operand_1 (gimple *
 
 /* Function vect_get_vec_def_for_operand.
 
-   OP is an operand in STMT.  This function returns a (vector) def that will be
-   used in the vectorized stmt for STMT.
+   OP is an operand in STMT_VINFO.  This function returns a (vector) def
+   that will be used in the vectorized stmt for STMT_VINFO.
 
    In the case that OP is an SSA_NAME which is defined in the loop, then
    STMT_VINFO_VEC_STMT of the defining stmt holds the relevant def.
@@ -1532,12 +1527,11 @@ vect_get_vec_def_for_operand_1 (gimple *
    vector invariant.  */
 
 tree
-vect_get_vec_def_for_operand (tree op, gimple *stmt, tree vectype)
+vect_get_vec_def_for_operand (tree op, stmt_vec_info stmt_vinfo, tree vectype)
 {
   gimple *def_stmt;
   enum vect_def_type dt;
   bool is_simple_use;
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
 
   if (dump_enabled_p ())
@@ -1683,12 +1677,11 @@ vect_get_vec_defs_for_stmt_copy (enum ve
 /* Get vectorized definitions for OP0 and OP1.  */
 
 void
-vect_get_vec_defs (tree op0, tree op1, gimple *stmt,
+vect_get_vec_defs (tree op0, tree op1, stmt_vec_info stmt_info,
 		   vec<tree> *vec_oprnds0,
 		   vec<tree> *vec_oprnds1,
 		   slp_tree slp_node)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   if (slp_node)
     {
       int nops = (op1 == NULL_TREE) ? 1 : 2;
@@ -1727,9 +1720,8 @@ vect_get_vec_defs (tree op0, tree op1, g
    statement and create and return a stmt_vec_info for it.  */
 
 static stmt_vec_info
-vect_finish_stmt_generation_1 (gimple *stmt, gimple *vec_stmt)
+vect_finish_stmt_generation_1 (stmt_vec_info stmt_info, gimple *vec_stmt)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
 
   stmt_vec_info vec_stmt_info = vinfo->add_stmt (vec_stmt);
@@ -1752,14 +1744,13 @@ vect_finish_stmt_generation_1 (gimple *s
   return vec_stmt_info;
 }
 
-/* Replace the scalar statement STMT with a new vector statement VEC_STMT,
-   which sets the same scalar result as STMT did.  Create and return a
+/* Replace the scalar statement STMT_INFO with a new vector statement VEC_STMT,
+   which sets the same scalar result as STMT_INFO did.  Create and return a
    stmt_vec_info for VEC_STMT.  */
 
 stmt_vec_info
-vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
+vect_finish_replace_stmt (stmt_vec_info stmt_info, gimple *vec_stmt)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   gcc_assert (gimple_get_lhs (stmt_info->stmt) == gimple_get_lhs (vec_stmt));
 
   gimple_stmt_iterator gsi = gsi_for_stmt (stmt_info->stmt);
@@ -1768,14 +1759,13 @@ vect_finish_replace_stmt (gimple *stmt,
   return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
 }
 
-/* Add VEC_STMT to the vectorized implementation of STMT and insert it
+/* Add VEC_STMT to the vectorized implementation of STMT_INFO and insert it
    before *GSI.  Create and return a stmt_vec_info for VEC_STMT.  */
 
 stmt_vec_info
-vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
+vect_finish_stmt_generation (stmt_vec_info stmt_info, gimple *vec_stmt,
 			     gimple_stmt_iterator *gsi)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   gcc_assert (gimple_code (stmt_info->stmt) != GIMPLE_LABEL);
 
   if (!gsi_end_p (*gsi)
@@ -1976,22 +1966,21 @@ prepare_load_store_mask (tree mask_type,
 }
 
 /* Determine whether we can use a gather load or scatter store to vectorize
-   strided load or store STMT by truncating the current offset to a smaller
-   width.  We need to be able to construct an offset vector:
+   strided load or store STMT_INFO by truncating the current offset to a
+   smaller width.  We need to be able to construct an offset vector:
 
      { 0, X, X*2, X*3, ... }
 
-   without loss of precision, where X is STMT's DR_STEP.
+   without loss of precision, where X is STMT_INFO's DR_STEP.
 
    Return true if this is possible, describing the gather load or scatter
    store in GS_INFO.  MASKED_P is true if the load or store is conditional.  */
 
 static bool
-vect_truncate_gather_scatter_offset (gimple *stmt, loop_vec_info loop_vinfo,
-				     bool masked_p,
+vect_truncate_gather_scatter_offset (stmt_vec_info stmt_info,
+				     loop_vec_info loop_vinfo, bool masked_p,
 				     gather_scatter_info *gs_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree step = DR_STEP (dr);
   if (TREE_CODE (step) != INTEGER_CST)
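
Concretely: for a hypothetical access with DR_STEP X = 4 and four
elements per offset vector, the offsets are

    { 0, 4, 8, 12 }

and a narrower offset type is usable only if every such multiple of X
fits in it without loss of precision.
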
@@ -2112,14 +2101,13 @@ vect_use_strided_gather_scatters_p (stmt
   return true;
 }
 
-/* STMT is a non-strided load or store, meaning that it accesses
+/* STMT_INFO is a non-strided load or store, meaning that it accesses
    elements with a known constant step.  Return -1 if that step
    is negative, 0 if it is zero, and 1 if it is greater than zero.  */
 
 static int
-compare_step_with_zero (gimple *stmt)
+compare_step_with_zero (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   return tree_int_cst_compare (vect_dr_behavior (dr)->step,
 			       size_zero_node);
@@ -2144,29 +2132,29 @@ perm_mask_for_reverse (tree vectype)
   return vect_gen_perm_mask_checked (vectype, indices);
 }
 
-/* STMT is either a masked or unconditional store.  Return the value
+/* STMT_INFO is either a masked or unconditional store.  Return the value
    being stored.  */
 
 tree
-vect_get_store_rhs (gimple *stmt)
+vect_get_store_rhs (stmt_vec_info stmt_info)
 {
-  if (gassign *assign = dyn_cast <gassign *> (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt_info->stmt))
     {
       gcc_assert (gimple_assign_single_p (assign));
       return gimple_assign_rhs1 (assign);
     }
-  if (gcall *call = dyn_cast <gcall *> (stmt))
+  if (gcall *call = dyn_cast <gcall *> (stmt_info->stmt))
     {
       internal_fn ifn = gimple_call_internal_fn (call);
       int index = internal_fn_stored_value_index (ifn);
       gcc_assert (index >= 0);
-      return gimple_call_arg (stmt, index);
+      return gimple_call_arg (call, index);
     }
   gcc_unreachable ();
 }
 
 /* A subroutine of get_load_store_type, with a subset of the same
-   arguments.  Handle the case where STMT is part of a grouped load
+   arguments.  Handle the case where STMT_INFO is part of a grouped load
    or store.
 
    For stores, the statements in the group are all consecutive
@@ -2175,12 +2163,11 @@ vect_get_store_rhs (gimple *stmt)
    as well as at the end.  */
 
 static bool
-get_group_load_store_type (gimple *stmt, tree vectype, bool slp,
+get_group_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
 			   bool masked_p, vec_load_store_type vls_type,
 			   vect_memory_access_type *memory_access_type,
 			   gather_scatter_info *gs_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
@@ -2350,15 +2337,14 @@ get_group_load_store_type (gimple *stmt,
 }
 
 /* A subroutine of get_load_store_type, with a subset of the same
-   arguments.  Handle the case where STMT is a load or store that
+   arguments.  Handle the case where STMT_INFO is a load or store that
    accesses consecutive elements with a negative step.  */
 
 static vect_memory_access_type
-get_negative_load_store_type (gimple *stmt, tree vectype,
+get_negative_load_store_type (stmt_vec_info stmt_info, tree vectype,
 			      vec_load_store_type vls_type,
 			      unsigned int ncopies)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   dr_alignment_support alignment_support_scheme;
 
@@ -2400,7 +2386,7 @@ get_negative_load_store_type (gimple *st
   return VMAT_CONTIGUOUS_REVERSE;
 }
 
-/* Analyze load or store statement STMT of type VLS_TYPE.  Return true
+/* Analyze load or store statement STMT_INFO of type VLS_TYPE.  Return true
    if there is a memory access type that the vectorized form can use,
    storing it in *MEMORY_ACCESS_TYPE if so.  If we decide to use gathers
    or scatters, fill in GS_INFO accordingly.
@@ -2411,12 +2397,12 @@ get_negative_load_store_type (gimple *st
    NCOPIES is the number of vector statements that will be needed.  */
 
 static bool
-get_load_store_type (gimple *stmt, tree vectype, bool slp, bool masked_p,
-		     vec_load_store_type vls_type, unsigned int ncopies,
+get_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
+		     bool masked_p, vec_load_store_type vls_type,
+		     unsigned int ncopies,
 		     vect_memory_access_type *memory_access_type,
 		     gather_scatter_info *gs_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
@@ -2496,12 +2482,12 @@ get_load_store_type (gimple *stmt, tree
 }
 
 /* Return true if boolean argument MASK is suitable for vectorizing
-   conditional load or store STMT.  When returning true, store the type
+   conditional load or store STMT_INFO.  When returning true, store the type
    of the definition in *MASK_DT_OUT and the type of the vectorized mask
    in *MASK_VECTYPE_OUT.  */
 
 static bool
-vect_check_load_store_mask (gimple *stmt, tree mask,
+vect_check_load_store_mask (stmt_vec_info stmt_info, tree mask,
 			    vect_def_type *mask_dt_out,
 			    tree *mask_vectype_out)
 {
@@ -2521,7 +2507,6 @@ vect_check_load_store_mask (gimple *stmt
       return false;
     }
 
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   enum vect_def_type mask_dt;
   tree mask_vectype;
   if (!vect_is_simple_use (mask, stmt_info->vinfo, &mask_dt, &mask_vectype))
@@ -2566,13 +2551,14 @@ vect_check_load_store_mask (gimple *stmt
 }
 
 /* Return true if stored value RHS is suitable for vectorizing store
-   statement STMT.  When returning true, store the type of the
+   statement STMT_INFO.  When returning true, store the type of the
    definition in *RHS_DT_OUT, the type of the vectorized store value in
    *RHS_VECTYPE_OUT and the type of the store in *VLS_TYPE_OUT.  */
 
 static bool
-vect_check_store_rhs (gimple *stmt, tree rhs, vect_def_type *rhs_dt_out,
-		      tree *rhs_vectype_out, vec_load_store_type *vls_type_out)
+vect_check_store_rhs (stmt_vec_info stmt_info, tree rhs,
+		      vect_def_type *rhs_dt_out, tree *rhs_vectype_out,
+		      vec_load_store_type *vls_type_out)
 {
   /* In the case this is a store from a constant make sure
      native_encode_expr can handle it.  */
@@ -2584,7 +2570,6 @@ vect_check_store_rhs (gimple *stmt, tree
       return false;
     }
 
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   enum vect_def_type rhs_dt;
   tree rhs_vectype;
   if (!vect_is_simple_use (rhs, stmt_info->vinfo, &rhs_dt, &rhs_vectype))
@@ -2666,18 +2651,19 @@ vect_build_zero_merge_argument (stmt_vec
   return vect_init_vector (stmt_info, merge, vectype, NULL);
 }
 
-/* Build a gather load call while vectorizing STMT.  Insert new instructions
-   before GSI and add them to VEC_STMT.  GS_INFO describes the gather load
-   operation.  If the load is conditional, MASK is the unvectorized
-   condition and MASK_DT is its definition type, otherwise MASK is null.  */
+/* Build a gather load call while vectorizing STMT_INFO.  Insert new
+   instructions before GSI and add them to VEC_STMT.  GS_INFO describes
+   the gather load operation.  If the load is conditional, MASK is the
+   unvectorized condition and MASK_DT is its definition type, otherwise
+   MASK is null.  */
 
 static void
-vect_build_gather_load_calls (gimple *stmt, gimple_stmt_iterator *gsi,
+vect_build_gather_load_calls (stmt_vec_info stmt_info,
+			      gimple_stmt_iterator *gsi,
 			      stmt_vec_info *vec_stmt,
-			      gather_scatter_info *gs_info, tree mask,
-			      vect_def_type mask_dt)
+			      gather_scatter_info *gs_info,
+			      tree mask, vect_def_type mask_dt)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -2897,7 +2883,7 @@ vect_get_gather_scatter_ops (struct loop
 
 /* Prepare to implement a grouped or strided load or store using
    the gather load or scatter store operation described by GS_INFO.
-   STMT is the load or store statement.
+   STMT_INFO is the load or store statement.
 
    Set *DATAREF_BUMP to the amount that should be added to the base
    address after each copy of the vectorized statement.  Set *VEC_OFFSET
@@ -2905,11 +2891,11 @@ vect_get_gather_scatter_ops (struct loop
    I * DR_STEP / SCALE.  */
 
 static void
-vect_get_strided_load_store_ops (gimple *stmt, loop_vec_info loop_vinfo,
+vect_get_strided_load_store_ops (stmt_vec_info stmt_info,
+				 loop_vec_info loop_vinfo,
 				 gather_scatter_info *gs_info,
 				 tree *dataref_bump, tree *vec_offset)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -2963,13 +2949,13 @@ vect_get_data_ptr_increment (data_refere
 /* Check and perform vectorization of BUILT_IN_BSWAP{16,32,64}.  */
 
 static bool
-vectorizable_bswap (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_bswap (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		    stmt_vec_info *vec_stmt, slp_tree slp_node,
 		    tree vectype_in, enum vect_def_type *dt,
 		    stmt_vector_for_cost *cost_vec)
 {
   tree op, vectype;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  gcall *stmt = as_a <gcall *> (stmt_info->stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   unsigned ncopies;
   unsigned HOST_WIDE_INT nunits, num_bytes;
@@ -3103,13 +3089,13 @@ simple_integer_narrowing (tree vectype_o
 
 /* Function vectorizable_call.
 
-   Check if GS performs a function call that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Check if STMT_INFO performs a function call that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi,
+vectorizable_call (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		   stmt_vec_info *vec_stmt, slp_tree slp_node,
 		   stmt_vector_for_cost *cost_vec)
 {
@@ -3118,7 +3104,7 @@ vectorizable_call (gimple *gs, gimple_st
   tree scalar_dest;
   tree op;
   tree vec_oprnd0 = NULL_TREE, vec_oprnd1 = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (gs), prev_stmt_info;
+  stmt_vec_info prev_stmt_info;
   tree vectype_out, vectype_in;
   poly_uint64 nunits_in;
   poly_uint64 nunits_out;
@@ -3747,14 +3733,15 @@ simd_clone_subparts (tree vectype)
 
 /* Function vectorizable_simd_clone_call.
 
-   Check if STMT performs a function call that can be vectorized
+   Check if STMT_INFO performs a function call that can be vectorized
    by calling a simd clone of the function.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_simd_clone_call (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_simd_clone_call (stmt_vec_info stmt_info,
+			      gimple_stmt_iterator *gsi,
 			      stmt_vec_info *vec_stmt, slp_tree slp_node,
 			      stmt_vector_for_cost *)
 {
@@ -3762,7 +3749,7 @@ vectorizable_simd_clone_call (gimple *st
   tree scalar_dest;
   tree op, type;
   tree vec_oprnd0 = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt), prev_stmt_info;
+  stmt_vec_info prev_stmt_info;
   tree vectype;
   unsigned int nunits;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
@@ -3778,7 +3765,8 @@ vectorizable_simd_clone_call (gimple *st
   vec<constructor_elt, va_gc> *ret_ctor_elts = NULL;
 
   /* Is STMT a vectorizable call?   */
-  if (!is_gimple_call (stmt))
+  gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   fndecl = gimple_call_fndecl (stmt);
@@ -4487,7 +4475,8 @@ vect_get_loop_based_defs (tree *oprnd, s
 
 static void
 vect_create_vectorized_demotion_stmts (vec<tree> *vec_oprnds,
-				       int multi_step_cvt, gimple *stmt,
+				       int multi_step_cvt,
+				       stmt_vec_info stmt_info,
 				       vec<tree> vec_dsts,
 				       gimple_stmt_iterator *gsi,
 				       slp_tree slp_node, enum tree_code code,
@@ -4495,7 +4484,6 @@ vect_create_vectorized_demotion_stmts (v
 {
   unsigned int i;
   tree vop0, vop1, new_tmp, vec_dest;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
 
   vec_dest = vec_dsts.pop ();
 
@@ -4606,13 +4594,13 @@ vect_create_vectorized_promotion_stmts (
 }
 
 
-/* Check if STMT performs a conversion operation, that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
+/* Check if STMT_INFO performs a conversion operation that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
    stmt to replace it, put it in VEC_STMT, and insert it at GSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_conversion (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_conversion (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			 stmt_vec_info *vec_stmt, slp_tree slp_node,
 			 stmt_vector_for_cost *cost_vec)
 {
@@ -4620,7 +4608,6 @@ vectorizable_conversion (gimple *stmt, g
   tree scalar_dest;
   tree op0, op1 = NULL_TREE;
   tree vec_oprnd0 = NULL_TREE, vec_oprnd1 = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   enum tree_code code, code1 = ERROR_MARK, code2 = ERROR_MARK;
   enum tree_code codecvt1 = ERROR_MARK, codecvt2 = ERROR_MARK;
@@ -4655,7 +4642,8 @@ vectorizable_conversion (gimple *stmt, g
       && ! vec_stmt)
     return false;
 
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   if (TREE_CODE (gimple_assign_lhs (stmt)) != SSA_NAME)
@@ -5220,20 +5208,19 @@ vectorizable_conversion (gimple *stmt, g
 
 /* Function vectorizable_assignment.
 
-   Check if STMT performs an assignment (copy) that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Check if STMT_INFO performs an assignment (copy) that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_assignment (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_assignment (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			 stmt_vec_info *vec_stmt, slp_tree slp_node,
 			 stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
   tree scalar_dest;
   tree op;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   tree new_temp;
   enum vect_def_type dt[1] = {vect_unknown_def_type};
@@ -5256,7 +5243,8 @@ vectorizable_assignment (gimple *stmt, g
     return false;
 
   /* Is vectorizable assignment?  */
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   scalar_dest = gimple_assign_lhs (stmt);
@@ -5422,13 +5410,13 @@ vect_supportable_shift (enum tree_code c
 
 /* Function vectorizable_shift.
 
-   Check if STMT performs a shift operation that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Check if STMT_INFO performs a shift operation that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_shift (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_shift (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		    stmt_vec_info *vec_stmt, slp_tree slp_node,
 		    stmt_vector_for_cost *cost_vec)
 {
@@ -5436,7 +5424,6 @@ vectorizable_shift (gimple *stmt, gimple
   tree scalar_dest;
   tree op0, op1 = NULL;
   tree vec_oprnd1 = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vectype;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   enum tree_code code;
@@ -5470,7 +5457,8 @@ vectorizable_shift (gimple *stmt, gimple
     return false;
 
   /* Is STMT a vectorizable binary/unary operation?   */
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   if (TREE_CODE (gimple_assign_lhs (stmt)) != SSA_NAME)
@@ -5789,21 +5777,20 @@ vectorizable_shift (gimple *stmt, gimple
 
 /* Function vectorizable_operation.
 
-   Check if STMT performs a binary, unary or ternary operation that can
+   Check if STMT_INFO performs a binary, unary or ternary operation that can
    be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_operation (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_operation (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			stmt_vec_info *vec_stmt, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
 {
   tree vec_dest;
   tree scalar_dest;
   tree op0, op1 = NULL_TREE, op2 = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vectype;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   enum tree_code code, orig_code;
@@ -5836,7 +5823,8 @@ vectorizable_operation (gimple *stmt, gi
     return false;
 
   /* Is STMT a vectorizable binary/unary operation?   */
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   if (TREE_CODE (gimple_assign_lhs (stmt)) != SSA_NAME)
@@ -6215,12 +6203,11 @@ ensure_base_align (struct data_reference
 
 /* Function get_group_alias_ptr_type.
 
-   Return the alias type for the group starting at FIRST_STMT.  */
+   Return the alias type for the group starting at FIRST_STMT_INFO.  */
 
 static tree
-get_group_alias_ptr_type (gimple *first_stmt)
+get_group_alias_ptr_type (stmt_vec_info first_stmt_info)
 {
-  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
   struct data_reference *first_dr, *next_dr;
 
   first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
@@ -6244,21 +6231,20 @@ get_group_alias_ptr_type (gimple *first_
 
 /* Function vectorizable_store.
 
-   Check if STMT defines a non scalar data-ref (array/pointer/structure) that
-   can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Check if STMT_INFO defines a non scalar data-ref (array/pointer/structure)
+   that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		    stmt_vec_info *vec_stmt, slp_tree slp_node,
 		    stmt_vector_for_cost *cost_vec)
 {
   tree data_ref;
   tree op;
   tree vec_oprnd = NULL_TREE;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
@@ -7350,19 +7336,19 @@ permute_vec_elements (tree x, tree y, tr
   return data_ref;
 }
 
-/* Hoist the definitions of all SSA uses on STMT out of the loop LOOP,
+/* Hoist the definitions of all SSA uses on STMT_INFO out of the loop LOOP,
    inserting them on the loops preheader edge.  Returns true if we
-   were successful in doing so (and thus STMT can be moved then),
+   were successful in doing so (and thus STMT_INFO can be moved then),
    otherwise returns false.  */
 
 static bool
-hoist_defs_of_uses (gimple *stmt, struct loop *loop)
+hoist_defs_of_uses (stmt_vec_info stmt_info, struct loop *loop)
 {
   ssa_op_iter i;
   tree op;
   bool any = false;
 
-  FOR_EACH_SSA_TREE_OPERAND (op, stmt, i, SSA_OP_USE)
+  FOR_EACH_SSA_TREE_OPERAND (op, stmt_info->stmt, i, SSA_OP_USE)
     {
       gimple *def_stmt = SSA_NAME_DEF_STMT (op);
       if (!gimple_nop_p (def_stmt)
@@ -7390,7 +7376,7 @@ hoist_defs_of_uses (gimple *stmt, struct
   if (!any)
     return true;
 
-  FOR_EACH_SSA_TREE_OPERAND (op, stmt, i, SSA_OP_USE)
+  FOR_EACH_SSA_TREE_OPERAND (op, stmt_info->stmt, i, SSA_OP_USE)
     {
       gimple *def_stmt = SSA_NAME_DEF_STMT (op);
       if (!gimple_nop_p (def_stmt)
@@ -7407,14 +7393,14 @@ hoist_defs_of_uses (gimple *stmt, struct
 
 /* vectorizable_load.
 
-   Check if STMT reads a non scalar data-ref (array/pointer/structure) that
-   can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
-   stmt to replace it, put it in VEC_STMT, and insert it at BSI.
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Check if STMT_INFO reads a non scalar data-ref (array/pointer/structure)
+   that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
+   stmt to replace it, put it in VEC_STMT, and insert it at GSI.
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_load (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		   stmt_vec_info *vec_stmt, slp_tree slp_node,
 		   slp_instance slp_node_instance,
 		   stmt_vector_for_cost *cost_vec)
@@ -7422,11 +7408,10 @@ vectorizable_load (gimple *stmt, gimple_
   tree scalar_dest;
   tree vec_dest = NULL;
   tree data_ref = NULL;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   stmt_vec_info prev_stmt_info;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
-  struct loop *containing_loop = (gimple_bb (stmt))->loop_father;
+  struct loop *containing_loop = gimple_bb (stmt_info->stmt)->loop_father;
   bool nested_in_vect_loop = false;
   struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
@@ -8532,6 +8517,7 @@ vectorizable_load (gimple *stmt, gimple_
 		      && !nested_in_vect_loop
 		      && hoist_defs_of_uses (stmt_info, loop))
 		    {
+		      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
 		      if (dump_enabled_p ())
 			{
 			  dump_printf_loc (MSG_NOTE, vect_location,
@@ -8730,19 +8716,19 @@ vect_is_simple_cond (tree cond, vec_info
 
 /* vectorizable_condition.
 
-   Check if STMT is conditional modify expression that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
+   Check if STMT_INFO is conditional modify expression that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
    stmt using VEC_COND_EXPR  to replace it, put it in VEC_STMT, and insert it
    at GSI.
 
-   When STMT is vectorized as nested cycle, REDUC_DEF is the vector variable
-   to be used at REDUC_INDEX (in then clause if REDUC_INDEX is 1, and in
-   else clause if it is 2).
+   When STMT_INFO is vectorized as a nested cycle, REDUC_DEF is the vector
+   variable to be used at REDUC_INDEX (in then clause if REDUC_INDEX is 1,
+   and in else clause if it is 2).
 
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 bool
-vectorizable_condition (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_condition (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			stmt_vec_info *vec_stmt, tree reduc_def,
 			int reduc_index, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
@@ -8751,7 +8737,6 @@ vectorizable_condition (gimple *stmt, gi
   tree vec_dest = NULL_TREE;
   tree cond_expr, cond_expr0 = NULL_TREE, cond_expr1 = NULL_TREE;
   tree then_clause, else_clause;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree comp_vectype = NULL_TREE;
   tree vec_cond_lhs = NULL_TREE, vec_cond_rhs = NULL_TREE;
   tree vec_then_clause = NULL_TREE, vec_else_clause = NULL_TREE;
@@ -8800,7 +8785,8 @@ vectorizable_condition (gimple *stmt, gi
     }
 
   /* Is vectorizable conditional operation?  */
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   code = gimple_assign_rhs_code (stmt);
@@ -9138,19 +9124,18 @@ vectorizable_condition (gimple *stmt, gi
 
 /* vectorizable_comparison.
 
-   Check if STMT is comparison expression that can be vectorized.
-   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
+   Check if STMT_INFO is comparison expression that can be vectorized.
+   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
    comparison, put it in VEC_STMT, and insert it at GSI.
 
-   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
+   Return true if STMT_INFO is vectorizable in this way.  */
 
 static bool
-vectorizable_comparison (gimple *stmt, gimple_stmt_iterator *gsi,
+vectorizable_comparison (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			 stmt_vec_info *vec_stmt, tree reduc_def,
 			 slp_tree slp_node, stmt_vector_for_cost *cost_vec)
 {
   tree lhs, rhs1, rhs2;
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree vectype1 = NULL_TREE, vectype2 = NULL_TREE;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   tree vec_rhs1 = NULL_TREE, vec_rhs2 = NULL_TREE;
@@ -9197,7 +9182,8 @@ vectorizable_comparison (gimple *stmt, g
       return false;
     }
 
-  if (!is_gimple_assign (stmt))
+  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+  if (!stmt)
     return false;
 
   code = gimple_assign_rhs_code (stmt);
@@ -9446,10 +9432,10 @@ can_vectorize_live_stmts (stmt_vec_info
 /* Make sure the statement is vectorizable.  */
 
 bool
-vect_analyze_stmt (gimple *stmt, bool *need_to_vectorize, slp_tree node,
-		   slp_instance node_instance, stmt_vector_for_cost *cost_vec)
+vect_analyze_stmt (stmt_vec_info stmt_info, bool *need_to_vectorize,
+		   slp_tree node, slp_instance node_instance,
+		   stmt_vector_for_cost *cost_vec)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   enum vect_relevant relevance = STMT_VINFO_RELEVANT (stmt_info);
@@ -9525,7 +9511,6 @@ vect_analyze_stmt (gimple *stmt, bool *n
 	      || STMT_VINFO_LIVE_P (pattern_stmt_info)))
         {
           /* Analyze PATTERN_STMT instead of the original stmt.  */
-	  stmt = pattern_stmt_info->stmt;
 	  stmt_info = pattern_stmt_info;
           if (dump_enabled_p ())
             {
@@ -9682,14 +9667,13 @@ vect_analyze_stmt (gimple *stmt, bool *n
 
 /* Function vect_transform_stmt.
 
-   Create a vectorized stmt to replace STMT, and insert it at BSI.  */
+   Create a vectorized stmt to replace STMT_INFO, and insert it at BSI.  */
 
 bool
-vect_transform_stmt (gimple *stmt, gimple_stmt_iterator *gsi,
+vect_transform_stmt (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		     bool *grouped_store, slp_tree slp_node,
                      slp_instance slp_node_instance)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   vec_info *vinfo = stmt_info->vinfo;
   bool is_store = false;
   stmt_vec_info vec_stmt = NULL;
@@ -9703,6 +9687,7 @@ vect_transform_stmt (gimple *stmt, gimpl
 		        (LOOP_VINFO_LOOP (STMT_VINFO_LOOP_VINFO (stmt_info)),
 			 stmt_info));
 
+  gimple *stmt = stmt_info->stmt;
   switch (STMT_VINFO_TYPE (stmt_info))
     {
     case type_demotion_vec_info_type:
@@ -9861,9 +9846,9 @@ vect_transform_stmt (gimple *stmt, gimpl
    stmt_vec_info.  */
 
 void
-vect_remove_stores (gimple *first_stmt)
+vect_remove_stores (stmt_vec_info first_stmt_info)
 {
-  stmt_vec_info next_stmt_info = vinfo_for_stmt (first_stmt);
+  stmt_vec_info next_stmt_info = first_stmt_info;
   gimple_stmt_iterator next_si;
 
   while (next_stmt_info)
@@ -10329,13 +10314,12 @@ vect_is_simple_use (tree operand, vec_in
    widening operation (short in the above example).  */
 
 bool
-supportable_widening_operation (enum tree_code code, gimple *stmt,
+supportable_widening_operation (enum tree_code code, stmt_vec_info stmt_info,
 				tree vectype_out, tree vectype_in,
                                 enum tree_code *code1, enum tree_code *code2,
                                 int *multi_step_cvt,
                                 vec<tree> *interm_types)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_info = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *vect_loop = NULL;
   machine_mode vec_mode;
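
For reference, a minimal sketch of the idiom that recurs in the
functions above (the function name here is made up for illustration;
gassign, dyn_cast and stmt_vec_info are the real types and helpers
used in the patch):

static bool
example_vectorizable_assign_p (stmt_vec_info stmt_info)
{
  /* The stmt_vec_info owns the underlying gimple statement, so
     downcast stmt_info->stmt instead of testing a separately-passed
     "gimple *stmt" with is_gimple_assign.  */
  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
  if (!stmt)
    return false;

  /* From here on STMT is known to be a gassign, so the
     gimple_assign_* accessors apply directly.  */
  return TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME;
}

The dyn_cast returns null for non-assignments, so it replaces the old
is_gimple_assign check while also giving the rest of the function a
correctly-typed handle.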

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [31/46] Use stmt_vec_info in function interfaces (part 1)
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (30 preceding siblings ...)
  2018-07-24 10:05 ` [32/46] Use stmt_vec_info in function interfaces (part 2) Richard Sandiford
@ 2018-07-24 10:05 ` Richard Sandiford
  2018-07-25 10:05   ` Richard Biener
  2018-07-24 10:06 ` [34/46] Alter interface to vect_get_vec_def_for_stmt_copy Richard Sandiford
                   ` (13 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:05 UTC (permalink / raw)
  To: gcc-patches

This first (less mechanical) part handles cases that involve changes in
the callers or non-trivial changes in the functions themselves.
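
To make the shape of the conversion concrete, here is a simplified
before/after sketch modelled on the iv_phi_p change below (the
"_old"/"_new" names are illustrative, and the double-reduction check
is omitted for brevity):

/* Before: the callee looked up the stmt_vec_info itself.  */
static bool
iv_phi_p_old (gphi *phi)
{
  if (virtual_operand_p (PHI_RESULT (phi)))
    return false;
  stmt_vec_info stmt_info = vinfo_for_stmt (phi);
  return STMT_VINFO_DEF_TYPE (stmt_info) != vect_reduction_def;
}

/* After: the caller passes the stmt_vec_info and the callee
   downcasts the wrapped statement when it needs the phi.  */
static bool
iv_phi_p_new (stmt_vec_info stmt_info)
{
  gphi *phi = as_a <gphi *> (stmt_info->stmt);
  if (virtual_operand_p (PHI_RESULT (phi)))
    return false;
  return STMT_VINFO_DEF_TYPE (stmt_info) != vect_reduction_def;
}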


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_describe_gather_scatter_call): Take
	a stmt_vec_info instead of a gcall.
	(vect_check_gather_scatter): Update call accordingly.
	* tree-vect-loop-manip.c (iv_phi_p): Take a stmt_vec_info instead
	of a gphi.
	(vect_can_advance_ivs_p, vect_update_ivs_after_vectorizer)
	(slpeel_update_phi_nodes_for_loops): Update calls accordingly.
	* tree-vect-loop.c (vect_transform_loop_stmt): Take a stmt_vec_info
	instead of a gimple stmt.
	(vect_transform_loop): Update calls accordingly.
	* tree-vect-slp.c (vect_split_slp_store_group): Take and return
	stmt_vec_infos instead of gimple stmts.
	(vect_analyze_slp_instance): Update use accordingly.
	* tree-vect-stmts.c (read_vector_array, write_vector_array)
	(vect_clobber_variable, vect_stmt_relevant_p, permute_vec_elements)
	(vect_use_strided_gather_scatters_p, vect_build_all_ones_mask)
	(vect_build_zero_merge_argument, vect_get_gather_scatter_ops)
	(vect_gen_widened_results_half, vect_get_loop_based_defs)
	(vect_create_vectorized_promotion_stmts, can_vectorize_live_stmts):
	Take a stmt_vec_info instead of a gimple stmt and pass stmt_vec_infos
	down to subroutines.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:35.376732054 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:46.108636749 +0100
@@ -3621,13 +3621,14 @@ vect_gather_scatter_fn_p (bool read_p, b
   return true;
 }
 
-/* CALL is a call to an internal gather load or scatter store function.
+/* STMT_INFO is a call to an internal gather load or scatter store function.
    Describe the operation in INFO.  */
 
 static void
-vect_describe_gather_scatter_call (gcall *call, gather_scatter_info *info)
+vect_describe_gather_scatter_call (stmt_vec_info stmt_info,
+				   gather_scatter_info *info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (call);
+  gcall *call = as_a <gcall *> (stmt_info->stmt);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
 
@@ -3672,7 +3673,7 @@ vect_check_gather_scatter (gimple *stmt,
       ifn = gimple_call_internal_fn (call);
       if (internal_gather_scatter_fn_p (ifn))
 	{
-	  vect_describe_gather_scatter_call (call, info);
+	  vect_describe_gather_scatter_call (stmt_info, info);
 	  return true;
 	}
       masked_p = (ifn == IFN_MASK_LOAD || ifn == IFN_MASK_STORE);
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:23:35.376732054 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:23:46.112636713 +0100
@@ -1335,16 +1335,16 @@ find_loop_location (struct loop *loop)
   return dump_user_location_t ();
 }
 
-/* Return true if PHI defines an IV of the loop to be vectorized.  */
+/* Return true if the phi described by STMT_INFO defines an IV of the
+   loop to be vectorized.  */
 
 static bool
-iv_phi_p (gphi *phi)
+iv_phi_p (stmt_vec_info stmt_info)
 {
+  gphi *phi = as_a <gphi *> (stmt_info->stmt);
   if (virtual_operand_p (PHI_RESULT (phi)))
     return false;
 
-  stmt_vec_info stmt_info = vinfo_for_stmt (phi);
-  gcc_assert (stmt_info != NULL_STMT_VEC_INFO);
   if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
       || STMT_VINFO_DEF_TYPE (stmt_info) == vect_double_reduction_def)
     return false;
@@ -1388,7 +1388,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
 	 virtual defs/uses (i.e., memory accesses) are analyzed elsewhere.
 
 	 Skip reduction phis.  */
-      if (!iv_phi_p (phi))
+      if (!iv_phi_p (phi_info))
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_NOTE, vect_location,
@@ -1509,7 +1509,7 @@ vect_update_ivs_after_vectorizer (loop_v
 	}
 
       /* Skip reduction and virtual phis.  */
-      if (!iv_phi_p (phi))
+      if (!iv_phi_p (phi_info))
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_NOTE, vect_location,
@@ -2088,7 +2088,8 @@ slpeel_update_phi_nodes_for_loops (loop_
       tree arg = PHI_ARG_DEF_FROM_EDGE (orig_phi, first_latch_e);
       /* Generate lcssa PHI node for the first loop.  */
       gphi *vect_phi = (loop == first) ? orig_phi : update_phi;
-      if (create_lcssa_for_iv_phis || !iv_phi_p (vect_phi))
+      stmt_vec_info vect_phi_info = loop_vinfo->lookup_stmt (vect_phi);
+      if (create_lcssa_for_iv_phis || !iv_phi_p (vect_phi_info))
 	{
 	  tree new_res = copy_ssa_name (PHI_RESULT (orig_phi));
 	  gphi *lcssa_phi = create_phi_node (new_res, between_bb);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:42.472669038 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:46.112636713 +0100
@@ -8207,21 +8207,18 @@ scale_profile_for_vect_loop (struct loop
     scale_bbs_frequencies (&loop->latch, 1, exit_l->probability / prob);
 }
 
-/* Vectorize STMT if relevant, inserting any new instructions before GSI.
-   When vectorizing STMT as a store, set *SEEN_STORE to its stmt_vec_info.
+/* Vectorize STMT_INFO if relevant, inserting any new instructions before GSI.
+   When vectorizing STMT_INFO as a store, set *SEEN_STORE to its stmt_vec_info.
    *SLP_SCHEDULE is a running record of whether we have called
    vect_schedule_slp.  */
 
 static void
-vect_transform_loop_stmt (loop_vec_info loop_vinfo, gimple *stmt,
+vect_transform_loop_stmt (loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
 			  gimple_stmt_iterator *gsi,
 			  stmt_vec_info *seen_store, bool *slp_scheduled)
 {
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
-  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
-  if (!stmt_info)
-    return;
 
   if (dump_enabled_p ())
     {
@@ -8476,15 +8473,19 @@ vect_transform_loop (loop_vec_info loop_
 		      gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
 		      for (gimple_stmt_iterator subsi = gsi_start (def_seq);
 			   !gsi_end_p (subsi); gsi_next (&subsi))
-			vect_transform_loop_stmt (loop_vinfo,
-						  gsi_stmt (subsi), &si,
-						  &seen_store,
-						  &slp_scheduled);
-		      gimple *pat_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
-		      vect_transform_loop_stmt (loop_vinfo, pat_stmt, &si,
+			{
+			  stmt_vec_info pat_stmt_info
+			    = loop_vinfo->lookup_stmt (gsi_stmt (subsi));
+			  vect_transform_loop_stmt (loop_vinfo, pat_stmt_info,
+						    &si, &seen_store,
+						    &slp_scheduled);
+			}
+		      stmt_vec_info pat_stmt_info
+			= STMT_VINFO_RELATED_STMT (stmt_info);
+		      vect_transform_loop_stmt (loop_vinfo, pat_stmt_info, &si,
 						&seen_store, &slp_scheduled);
 		    }
-		  vect_transform_loop_stmt (loop_vinfo, stmt, &si,
+		  vect_transform_loop_stmt (loop_vinfo, stmt_info, &si,
 					    &seen_store, &slp_scheduled);
 		}
 	      if (seen_store)
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:38.964700191 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:46.112636713 +0100
@@ -1856,16 +1856,15 @@ vect_find_last_scalar_stmt_in_slp (slp_t
   return last;
 }
 
-/* Splits a group of stores, currently beginning at FIRST_STMT, into two groups:
-   one (still beginning at FIRST_STMT) of size GROUP1_SIZE (also containing
-   the first GROUP1_SIZE stmts, since stores are consecutive), the second
-   containing the remainder.
+/* Splits a group of stores, currently beginning at FIRST_VINFO, into
+   two groups: one (still beginning at FIRST_VINFO) of size GROUP1_SIZE
+   (also containing the first GROUP1_SIZE stmts, since stores are
+   consecutive), the second containing the remainder.
    Return the first stmt in the second group.  */
 
-static gimple *
-vect_split_slp_store_group (gimple *first_stmt, unsigned group1_size)
+static stmt_vec_info
+vect_split_slp_store_group (stmt_vec_info first_vinfo, unsigned group1_size)
 {
-  stmt_vec_info first_vinfo = vinfo_for_stmt (first_stmt);
   gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_vinfo);
   gcc_assert (group1_size > 0);
   int group2_size = DR_GROUP_SIZE (first_vinfo) - group1_size;
@@ -2174,7 +2173,8 @@ vect_analyze_slp_instance (vec_info *vin
 	  gcc_assert ((const_nunits & (const_nunits - 1)) == 0);
 	  unsigned group1_size = i & ~(const_nunits - 1);
 
-	  gimple *rest = vect_split_slp_store_group (stmt_info, group1_size);
+	  stmt_vec_info rest = vect_split_slp_store_group (stmt_info,
+							   group1_size);
 	  bool res = vect_analyze_slp_instance (vinfo, stmt_info,
 						max_tree_size);
 	  /* If the first non-match was in the middle of a vector,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:42.472669038 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:46.116636678 +0100
@@ -117,12 +117,12 @@ create_vector_array (tree elem_type, uns
 
 /* ARRAY is an array of vectors created by create_vector_array.
    Return an SSA_NAME for the vector in index N.  The reference
-   is part of the vectorization of STMT and the vector is associated
+   is part of the vectorization of STMT_INFO and the vector is associated
    with scalar destination SCALAR_DEST.  */
 
 static tree
-read_vector_array (gimple *stmt, gimple_stmt_iterator *gsi, tree scalar_dest,
-		   tree array, unsigned HOST_WIDE_INT n)
+read_vector_array (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
+		   tree scalar_dest, tree array, unsigned HOST_WIDE_INT n)
 {
   tree vect_type, vect, vect_name, array_ref;
   gimple *new_stmt;
@@ -137,18 +137,18 @@ read_vector_array (gimple *stmt, gimple_
   new_stmt = gimple_build_assign (vect, array_ref);
   vect_name = make_ssa_name (vect, new_stmt);
   gimple_assign_set_lhs (new_stmt, vect_name);
-  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
   return vect_name;
 }
 
 /* ARRAY is an array of vectors created by create_vector_array.
    Emit code to store SSA_NAME VECT in index N of the array.
-   The store is part of the vectorization of STMT.  */
+   The store is part of the vectorization of STMT_INFO.  */
 
 static void
-write_vector_array (gimple *stmt, gimple_stmt_iterator *gsi, tree vect,
-		    tree array, unsigned HOST_WIDE_INT n)
+write_vector_array (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
+		    tree vect, tree array, unsigned HOST_WIDE_INT n)
 {
   tree array_ref;
   gimple *new_stmt;
@@ -158,7 +158,7 @@ write_vector_array (gimple *stmt, gimple
 		      NULL_TREE, NULL_TREE);
 
   new_stmt = gimple_build_assign (array_ref, vect);
-  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 }
 
 /* PTR is a pointer to an array of type TYPE.  Return a representation
@@ -176,15 +176,16 @@ create_array_ref (tree type, tree ptr, t
   return mem_ref;
 }
 
-/* Add a clobber of variable VAR to the vectorization of STMT.
+/* Add a clobber of variable VAR to the vectorization of STMT_INFO.
    Emit the clobber before *GSI.  */
 
 static void
-vect_clobber_variable (gimple *stmt, gimple_stmt_iterator *gsi, tree var)
+vect_clobber_variable (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
+		       tree var)
 {
   tree clobber = build_clobber (TREE_TYPE (var));
   gimple *new_stmt = gimple_build_assign (var, clobber);
-  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 }
 
 /* Utility functions used by vect_mark_stmts_to_be_vectorized.  */
@@ -281,8 +282,8 @@ is_simple_and_all_uses_invariant (gimple
 
 /* Function vect_stmt_relevant_p.
 
-   Return true if STMT in loop that is represented by LOOP_VINFO is
-   "relevant for vectorization".
+   Return true if STMT_INFO, in the loop that is represented by LOOP_VINFO,
+   is "relevant for vectorization".
 
    A stmt is considered "relevant for vectorization" if:
    - it has uses outside the loop.
@@ -292,7 +293,7 @@ is_simple_and_all_uses_invariant (gimple
    CHECKME: what other side effects would the vectorizer allow?  */
 
 static bool
-vect_stmt_relevant_p (gimple *stmt, loop_vec_info loop_vinfo,
+vect_stmt_relevant_p (stmt_vec_info stmt_info, loop_vec_info loop_vinfo,
 		      enum vect_relevant *relevant, bool *live_p)
 {
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
@@ -305,15 +306,14 @@ vect_stmt_relevant_p (gimple *stmt, loop
   *live_p = false;
 
   /* cond stmt other than loop exit cond.  */
-  if (is_ctrl_stmt (stmt)
-      && STMT_VINFO_TYPE (vinfo_for_stmt (stmt))
-         != loop_exit_ctrl_vec_info_type)
+  if (is_ctrl_stmt (stmt_info->stmt)
+      && STMT_VINFO_TYPE (stmt_info) != loop_exit_ctrl_vec_info_type)
     *relevant = vect_used_in_scope;
 
   /* changing memory.  */
-  if (gimple_code (stmt) != GIMPLE_PHI)
-    if (gimple_vdef (stmt)
-	&& !gimple_clobber_p (stmt))
+  if (gimple_code (stmt_info->stmt) != GIMPLE_PHI)
+    if (gimple_vdef (stmt_info->stmt)
+	&& !gimple_clobber_p (stmt_info->stmt))
       {
 	if (dump_enabled_p ())
 	  dump_printf_loc (MSG_NOTE, vect_location,
@@ -322,7 +322,7 @@ vect_stmt_relevant_p (gimple *stmt, loop
       }
 
   /* uses outside the loop.  */
-  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt, op_iter, SSA_OP_DEF)
+  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt_info->stmt, op_iter, SSA_OP_DEF)
     {
       FOR_EACH_IMM_USE_FAST (use_p, imm_iter, DEF_FROM_PTR (def_p))
 	{
@@ -347,7 +347,7 @@ vect_stmt_relevant_p (gimple *stmt, loop
     }
 
   if (*live_p && *relevant == vect_unused_in_scope
-      && !is_simple_and_all_uses_invariant (stmt, loop_vinfo))
+      && !is_simple_and_all_uses_invariant (stmt_info, loop_vinfo))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_NOTE, vect_location,
@@ -1838,7 +1838,7 @@ vectorizable_internal_function (combined
 }
 
 
-static tree permute_vec_elements (tree, tree, tree, gimple *,
+static tree permute_vec_elements (tree, tree, tree, stmt_vec_info,
 				  gimple_stmt_iterator *);
 
 /* Check whether a load or store statement in the loop described by
@@ -2072,19 +2072,19 @@ vect_truncate_gather_scatter_offset (gim
 }
 
 /* Return true if we can use gather/scatter internal functions to
-   vectorize STMT, which is a grouped or strided load or store.
+   vectorize STMT_INFO, which is a grouped or strided load or store.
    MASKED_P is true if load or store is conditional.  When returning
    true, fill in GS_INFO with the information required to perform the
    operation.  */
 
 static bool
-vect_use_strided_gather_scatters_p (gimple *stmt, loop_vec_info loop_vinfo,
-				    bool masked_p,
+vect_use_strided_gather_scatters_p (stmt_vec_info stmt_info,
+				    loop_vec_info loop_vinfo, bool masked_p,
 				    gather_scatter_info *gs_info)
 {
-  if (!vect_check_gather_scatter (stmt, loop_vinfo, gs_info)
+  if (!vect_check_gather_scatter (stmt_info, loop_vinfo, gs_info)
       || gs_info->decl)
-    return vect_truncate_gather_scatter_offset (stmt, loop_vinfo,
+    return vect_truncate_gather_scatter_offset (stmt_info, loop_vinfo,
 						masked_p, gs_info);
 
   scalar_mode element_mode = SCALAR_TYPE_MODE (gs_info->element_type);
@@ -2613,12 +2613,12 @@ vect_check_store_rhs (gimple *stmt, tree
   return true;
 }
 
-/* Build an all-ones vector mask of type MASKTYPE while vectorizing STMT.
+/* Build an all-ones vector mask of type MASKTYPE while vectorizing STMT_INFO.
    Note that we support masks with floating-point type, in which case the
    floats are interpreted as a bitmask.  */
 
 static tree
-vect_build_all_ones_mask (gimple *stmt, tree masktype)
+vect_build_all_ones_mask (stmt_vec_info stmt_info, tree masktype)
 {
   if (TREE_CODE (masktype) == INTEGER_TYPE)
     return build_int_cst (masktype, -1);
@@ -2626,7 +2626,7 @@ vect_build_all_ones_mask (gimple *stmt,
     {
       tree mask = build_int_cst (TREE_TYPE (masktype), -1);
       mask = build_vector_from_val (masktype, mask);
-      return vect_init_vector (stmt, mask, masktype, NULL);
+      return vect_init_vector (stmt_info, mask, masktype, NULL);
     }
   else if (SCALAR_FLOAT_TYPE_P (TREE_TYPE (masktype)))
     {
@@ -2637,16 +2637,16 @@ vect_build_all_ones_mask (gimple *stmt,
       real_from_target (&r, tmp, TYPE_MODE (TREE_TYPE (masktype)));
       tree mask = build_real (TREE_TYPE (masktype), r);
       mask = build_vector_from_val (masktype, mask);
-      return vect_init_vector (stmt, mask, masktype, NULL);
+      return vect_init_vector (stmt_info, mask, masktype, NULL);
     }
   gcc_unreachable ();
 }
 
 /* Build an all-zero merge value of type VECTYPE while vectorizing
-   STMT as a gather load.  */
+   STMT_INFO as a gather load.  */
 
 static tree
-vect_build_zero_merge_argument (gimple *stmt, tree vectype)
+vect_build_zero_merge_argument (stmt_vec_info stmt_info, tree vectype)
 {
   tree merge;
   if (TREE_CODE (TREE_TYPE (vectype)) == INTEGER_TYPE)
@@ -2663,7 +2663,7 @@ vect_build_zero_merge_argument (gimple *
   else
     gcc_unreachable ();
   merge = build_vector_from_val (vectype, merge);
-  return vect_init_vector (stmt, merge, vectype, NULL);
+  return vect_init_vector (stmt_info, merge, vectype, NULL);
 }
 
 /* Build a gather load call while vectorizing STMT.  Insert new instructions
@@ -2871,11 +2871,12 @@ vect_build_gather_load_calls (gimple *st
 
 /* Prepare the base and offset in GS_INFO for vectorization.
    Set *DATAREF_PTR to the loop-invariant base address and *VEC_OFFSET
-   to the vectorized offset argument for the first copy of STMT.  STMT
-   is the statement described by GS_INFO and LOOP is the containing loop.  */
+   to the vectorized offset argument for the first copy of STMT_INFO.
+   STMT_INFO is the statement described by GS_INFO and LOOP is the
+   containing loop.  */
 
 static void
-vect_get_gather_scatter_ops (struct loop *loop, gimple *stmt,
+vect_get_gather_scatter_ops (struct loop *loop, stmt_vec_info stmt_info,
 			     gather_scatter_info *gs_info,
 			     tree *dataref_ptr, tree *vec_offset)
 {
@@ -2890,7 +2891,7 @@ vect_get_gather_scatter_ops (struct loop
     }
   tree offset_type = TREE_TYPE (gs_info->offset);
   tree offset_vectype = get_vectype_for_scalar_type (offset_type);
-  *vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt,
+  *vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt_info,
 					      offset_vectype);
 }
 
@@ -4403,14 +4404,14 @@ vectorizable_simd_clone_call (gimple *st
    VEC_OPRND0 and VEC_OPRND1.  The new vector stmt is to be inserted at BSI.
    In the case that CODE is a CALL_EXPR, this means that a call to DECL
    needs to be created (DECL is a function-decl of a target-builtin).
-   STMT is the original scalar stmt that we are vectorizing.  */
+   STMT_INFO is the original scalar stmt that we are vectorizing.  */
 
 static gimple *
 vect_gen_widened_results_half (enum tree_code code,
 			       tree decl,
                                tree vec_oprnd0, tree vec_oprnd1, int op_type,
 			       tree vec_dest, gimple_stmt_iterator *gsi,
-			       gimple *stmt)
+			       stmt_vec_info stmt_info)
 {
   gimple *new_stmt;
   tree new_temp;
@@ -4436,22 +4437,23 @@ vect_gen_widened_results_half (enum tree
       new_temp = make_ssa_name (vec_dest, new_stmt);
       gimple_assign_set_lhs (new_stmt, new_temp);
     }
-  vect_finish_stmt_generation (stmt, new_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
   return new_stmt;
 }
 
 
-/* Get vectorized definitions for loop-based vectorization.  For the first
-   operand we call vect_get_vec_def_for_operand() (with OPRND containing
-   scalar operand), and for the rest we get a copy with
+/* Get vectorized definitions for loop-based vectorization of STMT_INFO.
+   For the first operand we call vect_get_vec_def_for_operand (with OPRND
+   containing scalar operand), and for the rest we get a copy with
    vect_get_vec_def_for_stmt_copy() using the previous vector definition
    (stored in OPRND). See vect_get_vec_def_for_stmt_copy() for details.
    The vectors are collected into VEC_OPRNDS.  */
 
 static void
-vect_get_loop_based_defs (tree *oprnd, gimple *stmt, enum vect_def_type dt,
-			  vec<tree> *vec_oprnds, int multi_step_cvt)
+vect_get_loop_based_defs (tree *oprnd, stmt_vec_info stmt_info,
+			  enum vect_def_type dt, vec<tree> *vec_oprnds,
+			  int multi_step_cvt)
 {
   tree vec_oprnd;
 
@@ -4459,7 +4461,7 @@ vect_get_loop_based_defs (tree *oprnd, g
   /* All the vector operands except the very first one (that is scalar oprnd)
      are stmt copies.  */
   if (TREE_CODE (TREE_TYPE (*oprnd)) != VECTOR_TYPE)
-    vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt);
+    vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt_info);
   else
     vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, *oprnd);
 
@@ -4474,7 +4476,8 @@ vect_get_loop_based_defs (tree *oprnd, g
   /* For conversion in multiple steps, continue to get operands
      recursively.  */
   if (multi_step_cvt)
-    vect_get_loop_based_defs (oprnd, stmt, dt, vec_oprnds,  multi_step_cvt - 1);
+    vect_get_loop_based_defs (oprnd, stmt_info, dt, vec_oprnds,
+			      multi_step_cvt - 1);
 }
 
 
@@ -4549,13 +4552,14 @@ vect_create_vectorized_demotion_stmts (v
 
 
 /* Create vectorized promotion statements for vector operands from VEC_OPRNDS0
-   and VEC_OPRNDS1 (for binary operations).  For multi-step conversions store
-   the resulting vectors and call the function recursively.  */
+   and VEC_OPRNDS1, for a binary operation associated with scalar statement
+   STMT_INFO.  For multi-step conversions store the resulting vectors and
+   call the function recursively.  */
 
 static void
 vect_create_vectorized_promotion_stmts (vec<tree> *vec_oprnds0,
 					vec<tree> *vec_oprnds1,
-					gimple *stmt, tree vec_dest,
+					stmt_vec_info stmt_info, tree vec_dest,
 					gimple_stmt_iterator *gsi,
 					enum tree_code code1,
 					enum tree_code code2, tree decl1,
@@ -4576,9 +4580,11 @@ vect_create_vectorized_promotion_stmts (
 
       /* Generate the two halves of promotion operation.  */
       new_stmt1 = vect_gen_widened_results_half (code1, decl1, vop0, vop1,
-						 op_type, vec_dest, gsi, stmt);
+						 op_type, vec_dest, gsi,
+						 stmt_info);
       new_stmt2 = vect_gen_widened_results_half (code2, decl2, vop0, vop1,
-						 op_type, vec_dest, gsi, stmt);
+						 op_type, vec_dest, gsi,
+						 stmt_info);
       if (is_gimple_call (new_stmt1))
 	{
 	  new_tmp1 = gimple_call_lhs (new_stmt1);
@@ -7318,19 +7324,19 @@ vect_gen_perm_mask_checked (tree vectype
 }
 
 /* Given a vector variable X and Y, that was generated for the scalar
-   STMT, generate instructions to permute the vector elements of X and Y
+   STMT_INFO, generate instructions to permute the vector elements of X and Y
    using permutation mask MASK_VEC, insert them at *GSI and return the
    permuted vector variable.  */
 
 static tree
-permute_vec_elements (tree x, tree y, tree mask_vec, gimple *stmt,
+permute_vec_elements (tree x, tree y, tree mask_vec, stmt_vec_info stmt_info,
 		      gimple_stmt_iterator *gsi)
 {
   tree vectype = TREE_TYPE (x);
   tree perm_dest, data_ref;
   gimple *perm_stmt;
 
-  tree scalar_dest = gimple_get_lhs (stmt);
+  tree scalar_dest = gimple_get_lhs (stmt_info->stmt);
   if (TREE_CODE (scalar_dest) == SSA_NAME)
     perm_dest = vect_create_destination_var (scalar_dest, vectype);
   else
@@ -7339,7 +7345,7 @@ permute_vec_elements (tree x, tree y, tr
 
   /* Generate the permute statement.  */
   perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, x, y, mask_vec);
-  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
 
   return data_ref;
 }
@@ -9409,11 +9415,11 @@ vectorizable_comparison (gimple *stmt, g
 
 /* If SLP_NODE is nonnull, return true if vectorizable_live_operation
    can handle all live statements in the node.  Otherwise return true
-   if STMT is not live or if vectorizable_live_operation can handle it.
+   if STMT_INFO is not live or if vectorizable_live_operation can handle it.
    GSI and VEC_STMT are as for vectorizable_live_operation.  */
 
 static bool
-can_vectorize_live_stmts (gimple *stmt, gimple_stmt_iterator *gsi,
+can_vectorize_live_stmts (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 			  slp_tree slp_node, stmt_vec_info *vec_stmt,
 			  stmt_vector_for_cost *cost_vec)
 {
@@ -9429,9 +9435,9 @@ can_vectorize_live_stmts (gimple *stmt,
 	    return false;
 	}
     }
-  else if (STMT_VINFO_LIVE_P (vinfo_for_stmt (stmt))
-	   && !vectorizable_live_operation (stmt, gsi, slp_node, -1, vec_stmt,
-					    cost_vec))
+  else if (STMT_VINFO_LIVE_P (stmt_info)
+	   && !vectorizable_live_operation (stmt_info, gsi, slp_node, -1,
+					    vec_stmt, cost_vec))
     return false;
 
   return true;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [35/46] Alter interfaces within vect_pattern_recog
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (32 preceding siblings ...)
  2018-07-24 10:06 ` [34/46] Alter interface to vect_get_vec_def_for_stmt_copy Richard Sandiford
@ 2018-07-24 10:06 ` Richard Sandiford
  2018-07-25 10:14   ` Richard Biener
  2018-07-24 10:06 ` [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs Richard Sandiford
                   ` (11 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:06 UTC (permalink / raw)
  To: gcc-patches

vect_pattern_recog_1 took a gimple_stmt_iterator as argument, but was
only interested in the statement it pointed to (its gsi_stmt), not in
the iterator itself.  This patch makes the associated routines operate
directly on stmt_vec_infos.
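
Concretely, each caller now does the stmt_vec_info lookup itself.  A
sketch of the resulting loop in vect_pattern_recog, paraphrasing the
hunk below (vinfo is the owning vec_info):

for (gimple_stmt_iterator si = gsi_start_bb (bb);
     !gsi_end_p (si); gsi_next (&si))
  {
    stmt_vec_info stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
    /* Scan over all generic vect_recog_xxx_pattern functions.  */
    for (unsigned j = 0; j < NUM_PATTERNS; j++)
      vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], stmt_info);
  }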


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-patterns.c (vect_mark_pattern_stmts): Take the
	original stmt as a stmt_vec_info rather than a gimple stmt.
	(vect_pattern_recog_1): Take the statement directly as a
	stmt_vec_info, rather than via a gimple_stmt_iterator.
	Update call to vect_mark_pattern_stmts.
	(vect_pattern_recog): Update calls accordingly.

Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:23:50.004602150 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:23:59.408518638 +0100
@@ -4720,29 +4720,29 @@ const unsigned int NUM_PATTERNS = ARRAY_
 /* Mark statements that are involved in a pattern.  */
 
 static inline void
-vect_mark_pattern_stmts (gimple *orig_stmt, gimple *pattern_stmt,
+vect_mark_pattern_stmts (stmt_vec_info orig_stmt_info, gimple *pattern_stmt,
                          tree pattern_vectype)
 {
-  stmt_vec_info orig_stmt_info = vinfo_for_stmt (orig_stmt);
   gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
 
-  bool old_pattern_p = is_pattern_stmt_p (orig_stmt_info);
-  if (old_pattern_p)
+  gimple *orig_pattern_stmt = NULL;
+  if (is_pattern_stmt_p (orig_stmt_info))
     {
       /* We're replacing a statement in an existing pattern definition
 	 sequence.  */
+      orig_pattern_stmt = orig_stmt_info->stmt;
       if (dump_enabled_p ())
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "replacing earlier pattern ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, orig_stmt, 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, orig_pattern_stmt, 0);
 	}
 
       /* To keep the book-keeping simple, just swap the lhs of the
 	 old and new statements, so that the old one has a valid but
 	 unused lhs.  */
-      tree old_lhs = gimple_get_lhs (orig_stmt);
-      gimple_set_lhs (orig_stmt, gimple_get_lhs (pattern_stmt));
+      tree old_lhs = gimple_get_lhs (orig_pattern_stmt);
+      gimple_set_lhs (orig_pattern_stmt, gimple_get_lhs (pattern_stmt));
       gimple_set_lhs (pattern_stmt, old_lhs);
 
       if (dump_enabled_p ())
@@ -4755,7 +4755,8 @@ vect_mark_pattern_stmts (gimple *orig_st
       orig_stmt_info = STMT_VINFO_RELATED_STMT (orig_stmt_info);
 
       /* We shouldn't be replacing the main pattern statement.  */
-      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) != orig_stmt);
+      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info)->stmt
+		  != orig_pattern_stmt);
     }
 
   if (def_seq)
@@ -4763,13 +4764,14 @@ vect_mark_pattern_stmts (gimple *orig_st
 	 !gsi_end_p (si); gsi_next (&si))
       vect_init_pattern_stmt (gsi_stmt (si), orig_stmt_info, pattern_vectype);
 
-  if (old_pattern_p)
+  if (orig_pattern_stmt)
     {
       vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, pattern_vectype);
 
       /* Insert all the new pattern statements before the original one.  */
       gimple_seq *orig_def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
-      gimple_stmt_iterator gsi = gsi_for_stmt (orig_stmt, orig_def_seq);
+      gimple_stmt_iterator gsi = gsi_for_stmt (orig_pattern_stmt,
+					       orig_def_seq);
       gsi_insert_seq_before_without_update (&gsi, def_seq, GSI_SAME_STMT);
       gsi_insert_before_without_update (&gsi, pattern_stmt, GSI_SAME_STMT);
 
@@ -4785,12 +4787,12 @@ vect_mark_pattern_stmts (gimple *orig_st
    Input:
    PATTERN_RECOG_FUNC: A pointer to a function that detects a certain
         computation pattern.
-   STMT: A stmt from which the pattern search should start.
+   STMT_INFO: A stmt from which the pattern search should start.
 
    If PATTERN_RECOG_FUNC successfully detected the pattern, it creates
    a sequence of statements that has the same functionality and can be
-   used to replace STMT.  It returns the last statement in the sequence
-   and adds any earlier statements to STMT's STMT_VINFO_PATTERN_DEF_SEQ.
+   used to replace STMT_INFO.  It returns the last statement in the sequence
+   and adds any earlier statements to STMT_INFO's STMT_VINFO_PATTERN_DEF_SEQ.
    PATTERN_RECOG_FUNC also sets *TYPE_OUT to the vector type of the final
    statement, having first checked that the target supports the new operation
    in that type.
@@ -4799,10 +4801,10 @@ vect_mark_pattern_stmts (gimple *orig_st
    for vect_recog_pattern.  */
 
 static void
-vect_pattern_recog_1 (vect_recog_func *recog_func, gimple_stmt_iterator si)
+vect_pattern_recog_1 (vect_recog_func *recog_func, stmt_vec_info stmt_info)
 {
-  gimple *stmt = gsi_stmt (si), *pattern_stmt;
-  stmt_vec_info stmt_info;
+  vec_info *vinfo = stmt_info->vinfo;
+  gimple *pattern_stmt;
   loop_vec_info loop_vinfo;
   tree pattern_vectype;
 
@@ -4810,13 +4812,12 @@ vect_pattern_recog_1 (vect_recog_func *r
      leave the original statement alone, since the first match wins.
      Instead try to match against the definition statements that feed
      the main pattern statement.  */
-  stmt_info = vinfo_for_stmt (stmt);
   if (STMT_VINFO_IN_PATTERN_P (stmt_info))
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info));
 	   !gsi_end_p (gsi); gsi_next (&gsi))
-	vect_pattern_recog_1 (recog_func, gsi);
+	vect_pattern_recog_1 (recog_func, vinfo->lookup_stmt (gsi_stmt (gsi)));
       return;
     }
 
@@ -4841,7 +4842,7 @@ vect_pattern_recog_1 (vect_recog_func *r
     }
 
   /* Mark the stmts that are involved in the pattern. */
-  vect_mark_pattern_stmts (stmt, pattern_stmt, pattern_vectype);
+  vect_mark_pattern_stmts (stmt_info, pattern_stmt, pattern_vectype);
 
   /* Patterns cannot be vectorized using SLP, because they change the order of
      computation.  */
@@ -4957,9 +4958,13 @@ vect_pattern_recog (vec_info *vinfo)
 	{
 	  basic_block bb = bbs[i];
 	  for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
-	    /* Scan over all generic vect_recog_xxx_pattern functions.  */
-	    for (j = 0; j < NUM_PATTERNS; j++)
-	      vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], si);
+	    {
+	      stmt_vec_info stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
+	      /* Scan over all generic vect_recog_xxx_pattern functions.  */
+	      for (j = 0; j < NUM_PATTERNS; j++)
+		vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j],
+				      stmt_info);
+	    }
 	}
     }
   else
@@ -4975,7 +4980,7 @@ vect_pattern_recog (vec_info *vinfo)
 
 	  /* Scan over all generic vect_recog_xxx_pattern functions.  */
 	  for (j = 0; j < NUM_PATTERNS; j++)
-	    vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], si);
+	    vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], stmt_info);
 	}
     }
 }

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [34/46] Alter interface to vect_get_vec_def_for_stmt_copy
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (31 preceding siblings ...)
  2018-07-24 10:05 ` [31/46] Use stmt_vec_info in function interfaces (part 1) Richard Sandiford
@ 2018-07-24 10:06 ` Richard Sandiford
  2018-07-25 10:13   ` Richard Biener
  2018-07-24 10:06 ` [35/46] Alter interfaces within vect_pattern_recog Richard Sandiford
                   ` (12 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:06 UTC (permalink / raw)
  To: gcc-patches

This patch makes vect_get_vec_def_for_stmt_copy take a vec_info
rather than a vect_def_type.  If the vector operand passed in is
defined in the vectorised region, we should look for copies in
the normal way.  If it's defined by an external statement (such as
one created by vect_init_vector_1), we should just use the original
value.
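
In caller terms this means the per-operand vect_def_type no longer
needs to be tracked just to get at later copies.  A sketch of typical
use under the new interface (op, stmt_info and ncopies stand for the
usual local variables; vinfo is e.g. stmt_info->vinfo):

/* First copy: vectorize the scalar operand as usual.  */
tree vec_oprnd = vect_get_vec_def_for_operand (op, stmt_info);
for (unsigned j = 1; j < ncopies; j++)
  /* Later copies: constants and external defs come back unchanged,
     while defs from the vectorized region advance to the next copy
     via STMT_VINFO_RELATED_STMT.  */
  vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);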


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_get_vec_defs_for_stmt_copy)
	(vect_get_vec_def_for_stmt_copy): Take a vec_info rather than
	a vect_def_type for the first argument.
	* tree-vect-stmts.c (vect_get_vec_defs_for_stmt_copy): Likewise.
	(vect_get_vec_def_for_stmt_copy): Likewise.  Return the original
	operand if it isn't defined by a vectorized statement.
	(vect_build_gather_load_calls): Remove the mask_dt argument and
	update calls to vect_get_vec_def_for_stmt_copy.
	(vectorizable_bswap): Likewise for the dt argument.
	(vectorizable_call): Update calls to vectorizable_bswap and
	vect_get_vec_def_for_stmt_copy.
	(vectorizable_simd_clone_call, vectorizable_assignment)
	(vectorizable_shift, vectorizable_operation, vectorizable_condition)
	(vectorizable_comparison): Update calls to
	vect_get_vec_def_for_stmt_copy.
	(vectorizable_store): Likewise.  Remove now-unnecessary calls to
	vect_is_simple_use.
	(vect_get_loop_based_defs): Remove dt argument and update call
	to vect_get_vec_def_for_stmt_copy.
	(vectorizable_conversion): Update calls to vect_get_loop_based_defs
	and vect_get_vec_def_for_stmt_copy.
	(vectorizable_load): Update calls to vect_build_gather_load_calls
	and vect_get_vec_def_for_stmt_copy.
	* tree-vect-loop.c (vect_create_epilog_for_reduction)
	(vectorizable_reduction, vectorizable_live_operation): Update calls
	to vect_get_vec_def_for_stmt_copy.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:50.008602115 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:56.440544995 +0100
@@ -1514,11 +1514,11 @@ extern tree vect_get_vec_def_for_operand
 extern tree vect_get_vec_def_for_operand (tree, stmt_vec_info, tree = NULL);
 extern void vect_get_vec_defs (tree, tree, stmt_vec_info, vec<tree> *,
 			       vec<tree> *, slp_tree);
-extern void vect_get_vec_defs_for_stmt_copy (enum vect_def_type *,
+extern void vect_get_vec_defs_for_stmt_copy (vec_info *,
 					     vec<tree> *, vec<tree> *);
 extern tree vect_init_vector (stmt_vec_info, tree, tree,
                               gimple_stmt_iterator *);
-extern tree vect_get_vec_def_for_stmt_copy (enum vect_def_type, tree);
+extern tree vect_get_vec_def_for_stmt_copy (vec_info *, tree);
 extern bool vect_transform_stmt (stmt_vec_info, gimple_stmt_iterator *,
                                  bool *, slp_tree, slp_instance);
 extern void vect_remove_stores (stmt_vec_info);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:50.008602115 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:56.440544995 +0100
@@ -1580,8 +1580,7 @@ vect_get_vec_def_for_operand (tree op, s
    created in case the vectorized result cannot fit in one vector, and several
    copies of the vector-stmt are required.  In this case the vector-def is
    retrieved from the vector stmt recorded in the STMT_VINFO_RELATED_STMT field
-   of the stmt that defines VEC_OPRND.
-   DT is the type of the vector def VEC_OPRND.
+   of the stmt that defines VEC_OPRND.  VINFO describes the vectorization.
 
    Context:
         In case the vectorization factor (VF) is bigger than the number
@@ -1625,29 +1624,24 @@ vect_get_vec_def_for_operand (tree op, s
    STMT_VINFO_RELATED_STMT field of 'VS1.0' we obtain the next copy - 'VS1.1',
    and return its def ('vx.1').
    Overall, to create the above sequence this function will be called 3 times:
-        vx.1 = vect_get_vec_def_for_stmt_copy (dt, vx.0);
-        vx.2 = vect_get_vec_def_for_stmt_copy (dt, vx.1);
-        vx.3 = vect_get_vec_def_for_stmt_copy (dt, vx.2);  */
+	vx.1 = vect_get_vec_def_for_stmt_copy (vinfo, vx.0);
+	vx.2 = vect_get_vec_def_for_stmt_copy (vinfo, vx.1);
+	vx.3 = vect_get_vec_def_for_stmt_copy (vinfo, vx.2);  */
 
 tree
-vect_get_vec_def_for_stmt_copy (enum vect_def_type dt, tree vec_oprnd)
+vect_get_vec_def_for_stmt_copy (vec_info *vinfo, tree vec_oprnd)
 {
-  gimple *vec_stmt_for_operand;
-  stmt_vec_info def_stmt_info;
-
-  /* Do nothing; can reuse same def.  */
-  if (dt == vect_external_def || dt == vect_constant_def )
+  stmt_vec_info def_stmt_info = vinfo->lookup_def (vec_oprnd);
+  if (!def_stmt_info)
+    /* Do nothing; can reuse same def.  */
     return vec_oprnd;
 
-  vec_stmt_for_operand = SSA_NAME_DEF_STMT (vec_oprnd);
-  def_stmt_info = vinfo_for_stmt (vec_stmt_for_operand);
+  def_stmt_info = STMT_VINFO_RELATED_STMT (def_stmt_info);
   gcc_assert (def_stmt_info);
-  vec_stmt_for_operand = STMT_VINFO_RELATED_STMT (def_stmt_info);
-  gcc_assert (vec_stmt_for_operand);
-  if (gimple_code (vec_stmt_for_operand) == GIMPLE_PHI)
-    vec_oprnd = PHI_RESULT (vec_stmt_for_operand);
+  if (gphi *phi = dyn_cast <gphi *> (def_stmt_info->stmt))
+    vec_oprnd = PHI_RESULT (phi);
   else
-    vec_oprnd = gimple_get_lhs (vec_stmt_for_operand);
+    vec_oprnd = gimple_get_lhs (def_stmt_info->stmt);
   return vec_oprnd;
 }
 
@@ -1656,19 +1650,19 @@ vect_get_vec_def_for_stmt_copy (enum vec
    stmt.  See vect_get_vec_def_for_stmt_copy () for details.  */
 
 void
-vect_get_vec_defs_for_stmt_copy (enum vect_def_type *dt,
+vect_get_vec_defs_for_stmt_copy (vec_info *vinfo,
 				 vec<tree> *vec_oprnds0,
 				 vec<tree> *vec_oprnds1)
 {
   tree vec_oprnd = vec_oprnds0->pop ();
 
-  vec_oprnd = vect_get_vec_def_for_stmt_copy (dt[0], vec_oprnd);
+  vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
   vec_oprnds0->quick_push (vec_oprnd);
 
   if (vec_oprnds1 && vec_oprnds1->length ())
     {
       vec_oprnd = vec_oprnds1->pop ();
-      vec_oprnd = vect_get_vec_def_for_stmt_copy (dt[1], vec_oprnd);
+      vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
       vec_oprnds1->quick_push (vec_oprnd);
     }
 }
@@ -2662,7 +2656,7 @@ vect_build_gather_load_calls (stmt_vec_i
 			      gimple_stmt_iterator *gsi,
 			      stmt_vec_info *vec_stmt,
 			      gather_scatter_info *gs_info,
-			      tree mask, vect_def_type mask_dt)
+			      tree mask)
 {
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
@@ -2767,8 +2761,8 @@ vect_build_gather_load_calls (stmt_vec_i
 	op = vec_oprnd0
 	  = vect_get_vec_def_for_operand (gs_info->offset, stmt_info);
       else
-	op = vec_oprnd0
-	  = vect_get_vec_def_for_stmt_copy (gs_info->offset_dt, vec_oprnd0);
+	op = vec_oprnd0 = vect_get_vec_def_for_stmt_copy (loop_vinfo,
+							  vec_oprnd0);
 
       if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))
 	{
@@ -2791,7 +2785,8 @@ vect_build_gather_load_calls (stmt_vec_i
 	      if (j == 0)
 		vec_mask = vect_get_vec_def_for_operand (mask, stmt_info);
 	      else
-		vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
+		vec_mask = vect_get_vec_def_for_stmt_copy (loop_vinfo,
+							   vec_mask);
 
 	      mask_op = vec_mask;
 	      if (!useless_type_conversion_p (masktype, TREE_TYPE (vec_mask)))
@@ -2951,11 +2946,11 @@ vect_get_data_ptr_increment (data_refere
 static bool
 vectorizable_bswap (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
 		    stmt_vec_info *vec_stmt, slp_tree slp_node,
-		    tree vectype_in, enum vect_def_type *dt,
-		    stmt_vector_for_cost *cost_vec)
+		    tree vectype_in, stmt_vector_for_cost *cost_vec)
 {
   tree op, vectype;
   gcall *stmt = as_a <gcall *> (stmt_info->stmt);
+  vec_info *vinfo = stmt_info->vinfo;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   unsigned ncopies;
   unsigned HOST_WIDE_INT nunits, num_bytes;
@@ -3021,7 +3016,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
       if (j == 0)
 	vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
       else
-        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
+	vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds, NULL);
 
       /* Arguments are ready. create the new vector stmt.  */
       unsigned i;
@@ -3301,7 +3296,7 @@ vectorizable_call (stmt_vec_info stmt_in
 		   || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP32)
 		   || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP64)))
 	return vectorizable_bswap (stmt_info, gsi, vec_stmt, slp_node,
-				   vectype_in, dt, cost_vec);
+				   vectype_in, cost_vec);
       else
 	{
 	  if (dump_enabled_p ())
@@ -3450,7 +3445,7 @@ vectorizable_call (stmt_vec_info stmt_in
 		  = vect_get_vec_def_for_operand (op, stmt_info);
 	      else
 		vec_oprnd0
-		  = vect_get_vec_def_for_stmt_copy (dt[i], orig_vargs[i]);
+		  = vect_get_vec_def_for_stmt_copy (vinfo, orig_vargs[i]);
 
 	      orig_vargs[i] = vargs[i] = vec_oprnd0;
 	    }
@@ -3582,16 +3577,16 @@ vectorizable_call (stmt_vec_info stmt_in
 		  vec_oprnd0
 		    = vect_get_vec_def_for_operand (op, stmt_info);
 		  vec_oprnd1
-		    = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd0);
+		    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
 		}
 	      else
 		{
 		  vec_oprnd1 = gimple_call_arg (new_stmt_info->stmt,
 						2 * i + 1);
 		  vec_oprnd0
-		    = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd1);
+		    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
 		  vec_oprnd1
-		    = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd0);
+		    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
 		}
 
 	      vargs.quick_push (vec_oprnd0);
@@ -4103,7 +4098,7 @@ vectorizable_simd_clone_call (stmt_vec_i
 			  vec_oprnd0 = arginfo[i].op;
 			  if ((m & (k - 1)) == 0)
 			    vec_oprnd0
-			      = vect_get_vec_def_for_stmt_copy (arginfo[i].dt,
+			      = vect_get_vec_def_for_stmt_copy (vinfo,
 								vec_oprnd0);
 			}
 		      arginfo[i].op = vec_oprnd0;
@@ -4134,7 +4129,7 @@ vectorizable_simd_clone_call (stmt_vec_i
 			      = vect_get_vec_def_for_operand (op, stmt_info);
 			  else
 			    vec_oprnd0
-			      = vect_get_vec_def_for_stmt_copy (arginfo[i].dt,
+			      = vect_get_vec_def_for_stmt_copy (vinfo,
 								arginfo[i].op);
 			  arginfo[i].op = vec_oprnd0;
 			  if (k == 1)
@@ -4440,9 +4435,9 @@ vect_gen_widened_results_half (enum tree
 
 static void
 vect_get_loop_based_defs (tree *oprnd, stmt_vec_info stmt_info,
-			  enum vect_def_type dt, vec<tree> *vec_oprnds,
-			  int multi_step_cvt)
+			  vec<tree> *vec_oprnds, int multi_step_cvt)
 {
+  vec_info *vinfo = stmt_info->vinfo;
   tree vec_oprnd;
 
   /* Get first vector operand.  */
@@ -4451,12 +4446,12 @@ vect_get_loop_based_defs (tree *oprnd, s
   if (TREE_CODE (TREE_TYPE (*oprnd)) != VECTOR_TYPE)
     vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt_info);
   else
-    vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, *oprnd);
+    vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, *oprnd);
 
   vec_oprnds->quick_push (vec_oprnd);
 
   /* Get second vector operand.  */
-  vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, vec_oprnd);
+  vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
   vec_oprnds->quick_push (vec_oprnd);
 
   *oprnd = vec_oprnd;
@@ -4464,7 +4459,7 @@ vect_get_loop_based_defs (tree *oprnd, s
   /* For conversion in multiple steps, continue to get operands
      recursively.  */
   if (multi_step_cvt)
-    vect_get_loop_based_defs (oprnd, stmt_info, dt, vec_oprnds,
+    vect_get_loop_based_defs (oprnd, stmt_info, vec_oprnds,
 			      multi_step_cvt - 1);
 }
 
@@ -4983,7 +4978,7 @@ vectorizable_conversion (stmt_vec_info s
 	    vect_get_vec_defs (op0, NULL, stmt_info, &vec_oprnds0,
 			       NULL, slp_node);
 	  else
-	    vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, NULL);
+	    vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, NULL);
 
 	  FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
 	    {
@@ -5070,7 +5065,7 @@ vectorizable_conversion (stmt_vec_info s
 	    }
 	  else
 	    {
-	      vec_oprnd0 = vect_get_vec_def_for_stmt_copy (dt[0], vec_oprnd0);
+	      vec_oprnd0 = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
 	      vec_oprnds0.truncate (0);
 	      vec_oprnds0.quick_push (vec_oprnd0);
 	      if (op_type == binary_op)
@@ -5078,7 +5073,7 @@ vectorizable_conversion (stmt_vec_info s
 		  if (code == WIDEN_LSHIFT_EXPR)
 		    vec_oprnd1 = op1;
 		  else
-		    vec_oprnd1 = vect_get_vec_def_for_stmt_copy (dt[1],
+		    vec_oprnd1 = vect_get_vec_def_for_stmt_copy (vinfo,
 								 vec_oprnd1);
 		  vec_oprnds1.truncate (0);
 		  vec_oprnds1.quick_push (vec_oprnd1);
@@ -5160,8 +5155,7 @@ vectorizable_conversion (stmt_vec_info s
 	  else
 	    {
 	      vec_oprnds0.truncate (0);
-	      vect_get_loop_based_defs (&last_oprnd, stmt_info, dt[0],
-					&vec_oprnds0,
+	      vect_get_loop_based_defs (&last_oprnd, stmt_info, &vec_oprnds0,
 					vect_pow2 (multi_step_cvt) - 1);
 	    }
 
@@ -5338,7 +5332,7 @@ vectorizable_assignment (stmt_vec_info s
       if (j == 0)
 	vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
       else
-        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
+	vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds, NULL);
 
       /* Arguments are ready. create the new vector stmt.  */
       stmt_vec_info new_stmt_info = NULL;
@@ -5742,7 +5736,7 @@ vectorizable_shift (stmt_vec_info stmt_i
 			       slp_node);
         }
       else
-        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
+	vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, &vec_oprnds1);
 
       /* Arguments are ready.  Create the new vector stmt.  */
       stmt_vec_info new_stmt_info = NULL;
@@ -6120,11 +6114,11 @@ vectorizable_operation (stmt_vec_info st
 	}
       else
 	{
-	  vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
+	  vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, &vec_oprnds1);
 	  if (op_type == ternary_op)
 	    {
 	      tree vec_oprnd = vec_oprnds2.pop ();
-	      vec_oprnds2.quick_push (vect_get_vec_def_for_stmt_copy (dt[2],
+	      vec_oprnds2.quick_push (vect_get_vec_def_for_stmt_copy (vinfo,
 							           vec_oprnd));
 	    }
 	}
@@ -6533,7 +6527,7 @@ vectorizable_store (stmt_vec_info stmt_i
 	      if (modifier == WIDEN)
 		{
 		  src = vec_oprnd1
-		    = vect_get_vec_def_for_stmt_copy (rhs_dt, vec_oprnd1);
+		    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
 		  op = permute_vec_elements (vec_oprnd0, vec_oprnd0, perm_mask,
 					     stmt_info, gsi);
 		}
@@ -6542,8 +6536,7 @@ vectorizable_store (stmt_vec_info stmt_i
 		  src = permute_vec_elements (vec_oprnd1, vec_oprnd1, perm_mask,
 					      stmt_info, gsi);
 		  op = vec_oprnd0
-		    = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
-						      vec_oprnd0);
+		    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
 		}
 	      else
 		gcc_unreachable ();
@@ -6551,10 +6544,9 @@ vectorizable_store (stmt_vec_info stmt_i
 	  else
 	    {
 	      src = vec_oprnd1
-		= vect_get_vec_def_for_stmt_copy (rhs_dt, vec_oprnd1);
+		= vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
 	      op = vec_oprnd0
-		= vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
-						  vec_oprnd0);
+		= vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
 	    }
 
 	  if (!useless_type_conversion_p (srctype, TREE_TYPE (src)))
@@ -6811,11 +6803,8 @@ vectorizable_store (stmt_vec_info stmt_i
 		  if (slp)
 		    vec_oprnd = vec_oprnds[j];
 		  else
-		    {
-		      vect_is_simple_use (op, vinfo, &rhs_dt);
-		      vec_oprnd = vect_get_vec_def_for_stmt_copy (rhs_dt,
-								  vec_oprnd);
-		    }
+		    vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo,
+								vec_oprnd);
 		}
 	      /* Pun the vector to extract from if necessary.  */
 	      if (lvectype != vectype)
@@ -7060,19 +7049,17 @@ vectorizable_store (stmt_vec_info stmt_i
 	  for (i = 0; i < group_size; i++)
 	    {
 	      op = oprnds[i];
-	      vect_is_simple_use (op, vinfo, &rhs_dt);
-	      vec_oprnd = vect_get_vec_def_for_stmt_copy (rhs_dt, op);
+	      vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, op);
 	      dr_chain[i] = vec_oprnd;
 	      oprnds[i] = vec_oprnd;
 	    }
 	  if (mask)
-	    vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
+	    vec_mask = vect_get_vec_def_for_stmt_copy (vinfo, vec_mask);
 	  if (dataref_offset)
 	    dataref_offset
 	      = int_const_binop (PLUS_EXPR, dataref_offset, bump);
 	  else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
-	    vec_offset = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
-							 vec_offset);
+	    vec_offset = vect_get_vec_def_for_stmt_copy (vinfo, vec_offset);
 	  else
 	    dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
 					   stmt_info, bump);
@@ -7680,8 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
-      vect_build_gather_load_calls (stmt_info, gsi, vec_stmt, &gs_info, mask,
-				    mask_dt);
+      vect_build_gather_load_calls (stmt_info, gsi, vec_stmt, &gs_info, mask);
       return true;
     }
 
@@ -8233,13 +8219,12 @@ vectorizable_load (stmt_vec_info stmt_in
 	    dataref_offset = int_const_binop (PLUS_EXPR, dataref_offset,
 					      bump);
 	  else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
-	    vec_offset = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
-							 vec_offset);
+	    vec_offset = vect_get_vec_def_for_stmt_copy (vinfo, vec_offset);
 	  else
 	    dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
 					   stmt_info, bump);
 	  if (mask)
-	    vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
+	    vec_mask = vect_get_vec_def_for_stmt_copy (vinfo, vec_mask);
 	}
 
       if (grouped_load || slp_perm)
@@ -8733,6 +8718,7 @@ vectorizable_condition (stmt_vec_info st
 			int reduc_index, slp_tree slp_node,
 			stmt_vector_for_cost *cost_vec)
 {
+  vec_info *vinfo = stmt_info->vinfo;
   tree scalar_dest = NULL_TREE;
   tree vec_dest = NULL_TREE;
   tree cond_expr, cond_expr0 = NULL_TREE, cond_expr1 = NULL_TREE;
@@ -8994,16 +8980,14 @@ vectorizable_condition (stmt_vec_info st
       else
 	{
 	  vec_cond_lhs
-	    = vect_get_vec_def_for_stmt_copy (dts[0],
-					      vec_oprnds0.pop ());
+	    = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnds0.pop ());
 	  if (!masked)
 	    vec_cond_rhs
-	      = vect_get_vec_def_for_stmt_copy (dts[1],
-						vec_oprnds1.pop ());
+	      = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnds1.pop ());
 
-	  vec_then_clause = vect_get_vec_def_for_stmt_copy (dts[2],
+	  vec_then_clause = vect_get_vec_def_for_stmt_copy (vinfo,
 							    vec_oprnds2.pop ());
-	  vec_else_clause = vect_get_vec_def_for_stmt_copy (dts[3],
+	  vec_else_clause = vect_get_vec_def_for_stmt_copy (vinfo,
 							    vec_oprnds3.pop ());
 	}
 
@@ -9135,6 +9119,7 @@ vectorizable_comparison (stmt_vec_info s
 			 stmt_vec_info *vec_stmt, tree reduc_def,
 			 slp_tree slp_node, stmt_vector_for_cost *cost_vec)
 {
+  vec_info *vinfo = stmt_info->vinfo;
   tree lhs, rhs1, rhs2;
   tree vectype1 = NULL_TREE, vectype2 = NULL_TREE;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -9331,9 +9316,9 @@ vectorizable_comparison (stmt_vec_info s
 	}
       else
 	{
-	  vec_rhs1 = vect_get_vec_def_for_stmt_copy (dts[0],
+	  vec_rhs1 = vect_get_vec_def_for_stmt_copy (vinfo,
 						     vec_oprnds0.pop ());
-	  vec_rhs2 = vect_get_vec_def_for_stmt_copy (dts[1],
+	  vec_rhs2 = vect_get_vec_def_for_stmt_copy (vinfo,
 						     vec_oprnds1.pop ());
 	}
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:50.004602150 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:56.436545030 +0100
@@ -4421,7 +4421,6 @@ vect_create_epilog_for_reduction (vec<tr
   bool nested_in_vect_loop = false;
   auto_vec<gimple *> new_phis;
   auto_vec<stmt_vec_info> inner_phis;
-  enum vect_def_type dt = vect_unknown_def_type;
   int j, i;
   auto_vec<tree> scalar_results;
   unsigned int group_size = 1, k, ratio;
@@ -4528,8 +4527,7 @@ vect_create_epilog_for_reduction (vec<tr
 	      phi_info = STMT_VINFO_RELATED_STMT (phi_info);
 	      if (nested_in_vect_loop)
 		vec_init_def
-		  = vect_get_vec_def_for_stmt_copy (initial_def_dt,
-						    vec_init_def);
+		  = vect_get_vec_def_for_stmt_copy (loop_vinfo, vec_init_def);
 	    }
 
 	  /* Set the loop-entry arg of the reduction-phi.  */
@@ -4556,7 +4554,7 @@ vect_create_epilog_for_reduction (vec<tr
 
           /* Set the loop-latch arg for the reduction-phi.  */
           if (j > 0)
-            def = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type, def);
+	    def = vect_get_vec_def_for_stmt_copy (loop_vinfo, def);
 
 	  add_phi_arg (phi, def, loop_latch_edge (loop), UNKNOWN_LOCATION);
 
@@ -4697,7 +4695,7 @@ vect_create_epilog_for_reduction (vec<tr
             new_phis.quick_push (phi);
           else
 	    {
-	      def = vect_get_vec_def_for_stmt_copy (dt, def);
+	      def = vect_get_vec_def_for_stmt_copy (loop_vinfo, def);
 	      STMT_VINFO_RELATED_STMT (prev_phi_info) = phi_info;
 	    }
 
@@ -7111,19 +7109,22 @@ vectorizable_reduction (stmt_vec_info st
 		vec_oprnds0[0] = gimple_get_lhs (new_stmt_info->stmt);
 	      else
 		vec_oprnds0[0]
-		  = vect_get_vec_def_for_stmt_copy (dts[0], vec_oprnds0[0]);
+		  = vect_get_vec_def_for_stmt_copy (loop_vinfo,
+						    vec_oprnds0[0]);
 	      if (single_defuse_cycle && reduc_index == 1)
 		vec_oprnds1[0] = gimple_get_lhs (new_stmt_info->stmt);
 	      else
 		vec_oprnds1[0]
-		  = vect_get_vec_def_for_stmt_copy (dts[1], vec_oprnds1[0]);
+		  = vect_get_vec_def_for_stmt_copy (loop_vinfo,
+						    vec_oprnds1[0]);
 	      if (op_type == ternary_op)
 		{
 		  if (single_defuse_cycle && reduc_index == 2)
 		    vec_oprnds2[0] = gimple_get_lhs (new_stmt_info->stmt);
 		  else
 		    vec_oprnds2[0] 
-		      = vect_get_vec_def_for_stmt_copy (dts[2], vec_oprnds2[0]);
+		      = vect_get_vec_def_for_stmt_copy (loop_vinfo,
+							vec_oprnds2[0]);
 		}
             }
         }
@@ -7945,8 +7946,7 @@ vectorizable_live_operation (stmt_vec_in
 
       /* For multiple copies, get the last copy.  */
       for (int i = 1; i < ncopies; ++i)
-	vec_lhs = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type,
-						  vec_lhs);
+	vec_lhs = vect_get_vec_def_for_stmt_copy (loop_vinfo, vec_lhs);
 
       /* Get the last lane in the vector.  */
       bitstart = int_const_binop (MINUS_EXPR, vec_bitsize, bitsize);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (33 preceding siblings ...)
  2018-07-24 10:06 ` [35/46] Alter interfaces within vect_pattern_recog Richard Sandiford
@ 2018-07-24 10:06 ` Richard Sandiford
  2018-07-25 10:06   ` Richard Biener
  2018-07-24 10:07 ` [36/46] Add a pattern_stmt_p field to stmt_vec_info Richard Sandiford
                   ` (10 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:06 UTC (permalink / raw)
  To: gcc-patches

This patch makes vect_record_max_nunits and vect_record_base_alignment
take a stmt_vec_info instead of a vec_info/gimple pair.
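
As a sketch of why the vec_info parameter is redundant (this is not
part of the patch): each stmt_vec_info carries back-pointers to both
its owning vec_info and its underlying gimple stmt, so a callee can
recover the old pair from the stmt_vec_info alone:

  /* Minimal sketch; the real function records DRB in
     vinfo->base_alignments as in the hunks below.  */
  static void
  vect_record_base_alignment (stmt_vec_info stmt_info,
                              innermost_loop_behavior *drb)
  {
    vec_info *vinfo = stmt_info->vinfo;  /* owning vec_info */
    gimple *stmt = stmt_info->stmt;      /* underlying gimple stmt */
    /* ... record DRB against VINFO as before ...  */
  }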


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_record_base_alignment): Replace vec_info
	and gimple stmt arguments with a stmt_vec_info.
	(vect_record_base_alignments): Update calls accordingly.
	* tree-vect-slp.c (vect_record_max_nunits): Replace vec_info
	and gimple stmt arguments with a stmt_vec_info.
	(vect_build_slp_tree_1): Remove vinfo argument and update call
	to vect_record_max_nunits.
	(vect_build_slp_tree_2): Update calls to vect_build_slp_tree_1
	and vect_record_max_nunits.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:50.000602186 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:53.204573732 +0100
@@ -794,14 +794,14 @@ vect_slp_analyze_instance_dependence (sl
   return res;
 }
 
-/* Record in VINFO the base alignment guarantee given by DRB.  STMT is
-   the statement that contains DRB, which is useful for recording in the
-   dump file.  */
+/* Record the base alignment guarantee given by DRB, which occurs
+   in STMT_INFO.  */
 
 static void
-vect_record_base_alignment (vec_info *vinfo, gimple *stmt,
+vect_record_base_alignment (stmt_vec_info stmt_info,
 			    innermost_loop_behavior *drb)
 {
+  vec_info *vinfo = stmt_info->vinfo;
   bool existed;
   innermost_loop_behavior *&entry
     = vinfo->base_alignments.get_or_insert (drb->base_address, &existed);
@@ -820,7 +820,7 @@ vect_record_base_alignment (vec_info *vi
 			   "  misalignment: %d\n", drb->base_misalignment);
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "  based on:     ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	}
     }
 }
@@ -847,13 +847,13 @@ vect_record_base_alignments (vec_info *v
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
 	{
-	  vect_record_base_alignment (vinfo, stmt_info, &DR_INNERMOST (dr));
+	  vect_record_base_alignment (stmt_info, &DR_INNERMOST (dr));
 
 	  /* If DR is nested in the loop that is being vectorized, we can also
 	     record the alignment of the base wrt the outer loop.  */
 	  if (loop && nested_in_vect_loop_p (loop, stmt_info))
 	    vect_record_base_alignment
-		(vinfo, stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
+	      (stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
 	}
     }
 }
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:50.004602150 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:53.204573732 +0100
@@ -609,14 +609,14 @@ compatible_calls_p (gcall *call1, gcall
 }
 
 /* A subroutine of vect_build_slp_tree for checking VECTYPE, which is the
-   caller's attempt to find the vector type in STMT with the narrowest
+   caller's attempt to find the vector type in STMT_INFO with the narrowest
    element type.  Return true if VECTYPE is nonnull and if it is valid
-   for VINFO.  When returning true, update MAX_NUNITS to reflect the
-   number of units in VECTYPE.  VINFO, GORUP_SIZE and MAX_NUNITS are
-   as for vect_build_slp_tree.  */
+   for STMT_INFO.  When returning true, update MAX_NUNITS to reflect the
+   number of units in VECTYPE.  GROUP_SIZE and MAX_NUNITS are as for
+   vect_build_slp_tree.  */
 
 static bool
-vect_record_max_nunits (vec_info *vinfo, gimple *stmt, unsigned int group_size,
+vect_record_max_nunits (stmt_vec_info stmt_info, unsigned int group_size,
 			tree vectype, poly_uint64 *max_nunits)
 {
   if (!vectype)
@@ -625,7 +625,8 @@ vect_record_max_nunits (vec_info *vinfo,
 	{
 	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			   "Build SLP failed: unsupported data-type in ");
-	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+			    stmt_info->stmt, 0);
 	  dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
 	}
       /* Fatal mismatch.  */
@@ -636,7 +637,7 @@ vect_record_max_nunits (vec_info *vinfo,
      before adjusting *max_nunits for basic-block vectorization.  */
   poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
   unsigned HOST_WIDE_INT const_nunits;
-  if (is_a <bb_vec_info> (vinfo)
+  if (STMT_VINFO_BB_VINFO (stmt_info)
       && (!nunits.is_constant (&const_nunits)
 	  || const_nunits > group_size))
     {
@@ -696,7 +697,7 @@ vect_two_operations_perm_ok_p (vec<stmt_
    to (B1 <= A1 ? X1 : Y1); or be inverted to (A1 < B1) ? Y1 : X1.  */
 
 static bool
-vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
+vect_build_slp_tree_1 (unsigned char *swap,
 		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits, bool *matches,
 		       bool *two_operators)
@@ -763,7 +764,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
       if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
 					   &nunits_vectype)
 	  || (nunits_vectype
-	      && !vect_record_max_nunits (vinfo, stmt_info, group_size,
+	      && !vect_record_max_nunits (stmt_info, group_size,
 					  nunits_vectype, max_nunits)))
 	{
 	  /* Fatal mismatch.  */
@@ -1207,8 +1208,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
     {
       tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
       tree vectype = get_vectype_for_scalar_type (scalar_type);
-      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
-				   max_nunits))
+      if (!vect_record_max_nunits (stmt_info, group_size, vectype, max_nunits))
 	return NULL;
 
       vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
@@ -1241,7 +1241,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 
   bool two_operators = false;
   unsigned char *swap = XALLOCAVEC (unsigned char, group_size);
-  if (!vect_build_slp_tree_1 (vinfo, swap, stmts, group_size,
+  if (!vect_build_slp_tree_1 (swap, stmts, group_size,
 			      &this_max_nunits, matches, &two_operators))
     return NULL;
 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (34 preceding siblings ...)
  2018-07-24 10:06 ` [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs Richard Sandiford
@ 2018-07-24 10:07 ` Richard Sandiford
  2018-07-25 10:15   ` Richard Biener
  2018-07-24 10:07 ` [37/46] Associate alignment information with stmt_vec_infos Richard Sandiford
                   ` (9 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:07 UTC (permalink / raw)
  To: gcc-patches

This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
possible to tell whether the statement is a pattern statement without
referring to other statements.  The new field goes in what was
previously a hole in the structure, so the size is the same as before.
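
As a before/after sketch (both forms appear verbatim in the hunks
below; is_pattern_stmt_p is the predicate being deleted):

  /* Old: two dependent lookups through the related stmt.  */
  static inline bool
  is_pattern_stmt_p (stmt_vec_info stmt_info)
  {
    stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
    return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
  }

  /* New: a direct test of the flag, which vect_init_pattern_stmt sets
     when it creates the pattern statement, e.g.:  */
  if (stmt_info->pattern_stmt_p)
    stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);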


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
	(is_pattern_stmt_p): Delete.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
	on pattern statements.
	(vect_split_statement, vect_mark_pattern_stmts): Use the new
	pattern_stmt_p field instead of is_pattern_stmt_p.
	* tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
	* tree-vect-loop.c (vectorizable_live_operation): Likewise.
	* tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
	(vect_find_last_scalar_stmt_in_slp, vect_remove_slp_scalar_calls)
	(vect_schedule_slp): Likewise.
	* tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
	(vectorizable_call, vectorizable_simd_clone_call, vectorizable_shift)
	(vectorizable_store, vect_remove_stores): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:23:56.440544995 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:02.364492386 +0100
@@ -791,6 +791,12 @@ struct _stmt_vec_info {
   /* Stmt is part of some pattern (computation idiom)  */
   bool in_pattern_p;
 
+  /* True if the statement was created during pattern recognition as
+     part of the replacement for RELATED_STMT.  This implies that the
+     statement isn't part of any basic block, although for convenience
+     its gimple_bb is the same as for RELATED_STMT.  */
+  bool pattern_stmt_p;
+
   /* Is this statement vectorizable or should it be skipped in (partial)
      vectorization.  */
   bool vectorizable;
@@ -1151,16 +1157,6 @@ get_later_stmt (stmt_vec_info stmt1_info
     return stmt2_info;
 }
 
-/* Return TRUE if a statement represented by STMT_INFO is a part of a
-   pattern.  */
-
-static inline bool
-is_pattern_stmt_p (stmt_vec_info stmt_info)
-{
-  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
-  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
-}
-
 /* Return true if BB is a loop header.  */
 
 static inline bool
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:23:59.408518638 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:24:02.360492422 +0100
@@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
+  pattern_stmt_info->pattern_stmt_p = true;
   STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
   STMT_VINFO_DEF_TYPE (pattern_stmt_info)
     = STMT_VINFO_DEF_TYPE (orig_stmt_info);
@@ -630,7 +631,7 @@ vect_recog_temp_ssa_var (tree type, gimp
 vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
 		      gimple *stmt1, tree vectype)
 {
-  if (is_pattern_stmt_p (stmt2_info))
+  if (stmt2_info->pattern_stmt_p)
     {
       /* STMT2_INFO is part of a pattern.  Get the statement to which
 	 the pattern is attached.  */
@@ -4726,7 +4727,7 @@ vect_mark_pattern_stmts (stmt_vec_info o
   gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
 
   gimple *orig_pattern_stmt = NULL;
-  if (is_pattern_stmt_p (orig_stmt_info))
+  if (orig_stmt_info->pattern_stmt_p)
     {
       /* We're replacing a statement in an existing pattern definition
 	 sequence.  */
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:53.204573732 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:02.356492457 +0100
@@ -212,9 +212,9 @@ vect_preserves_scalar_order_p (stmt_vec_
      (but could happen later) while reads will happen no later than their
      current position (but could happen earlier).  Reordering is therefore
      only possible if the first access is a write.  */
-  if (is_pattern_stmt_p (stmtinfo_a))
+  if (stmtinfo_a->pattern_stmt_p)
     stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
-  if (is_pattern_stmt_p (stmtinfo_b))
+  if (stmtinfo_b->pattern_stmt_p)
     stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
   stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
   return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:56.436545030 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:02.360492422 +0100
@@ -7907,7 +7907,7 @@ vectorizable_live_operation (stmt_vec_in
     }
 
   /* If stmt has a related stmt, then use that for getting the lhs.  */
-  gimple *stmt = (is_pattern_stmt_p (stmt_info)
+  gimple *stmt = (stmt_info->pattern_stmt_p
 		  ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
 		  : stmt_info->stmt);
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:23:53.204573732 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:24:02.360492422 +0100
@@ -376,7 +376,7 @@ vect_get_and_check_slp_defs (vec_info *v
       /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
 	 the def stmt from the pattern.  Check that all the stmts of the
 	 node are in the pattern.  */
-      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
+      if (def_stmt_info && def_stmt_info->pattern_stmt_p)
         {
           pattern = true;
           if (!first && !oprnd_info->first_pattern
@@ -1315,7 +1315,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 	      /* ???  Rejecting patterns this way doesn't work.  We'd have to
 		 do extra work to cancel the pattern so the uses see the
 		 scalar version.  */
-	      && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
+	      && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
 	    {
 	      slp_tree grandchild;
 
@@ -1359,7 +1359,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 	  /* ???  Rejecting patterns this way doesn't work.  We'd have to
 	     do extra work to cancel the pattern so the uses see the
 	     scalar version.  */
-	  && !is_pattern_stmt_p (stmt_info))
+	  && !stmt_info->pattern_stmt_p)
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "Building vector operands from scalars\n");
@@ -1486,7 +1486,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
 		  /* ???  Rejecting patterns this way doesn't work.  We'd have
 		     to do extra work to cancel the pattern so the uses see the
 		     scalar version.  */
-		  && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
+		  && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
 		{
 		  unsigned int j;
 		  slp_tree grandchild;
@@ -1848,7 +1848,7 @@ vect_find_last_scalar_stmt_in_slp (slp_t
 
   for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
     {
-      if (is_pattern_stmt_p (stmt_vinfo))
+      if (stmt_vinfo->pattern_stmt_p)
 	stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
       last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
     }
@@ -4044,8 +4044,7 @@ vect_remove_slp_scalar_calls (slp_tree n
       gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
       if (!stmt || gimple_bb (stmt) == NULL)
 	continue;
-      if (is_pattern_stmt_p (stmt_info)
-	  || !PURE_SLP_STMT (stmt_info))
+      if (stmt_info->pattern_stmt_p || !PURE_SLP_STMT (stmt_info))
 	continue;
       lhs = gimple_call_lhs (stmt);
       new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
@@ -4106,7 +4105,7 @@ vect_schedule_slp (vec_info *vinfo)
 	  if (!STMT_VINFO_DATA_REF (store_info))
 	    break;
 
-	  if (is_pattern_stmt_p (store_info))
+	  if (store_info->pattern_stmt_p)
 	    store_info = STMT_VINFO_RELATED_STMT (store_info);
 	  /* Free the attached stmt_vec_info and remove the stmt.  */
 	  gsi = gsi_for_stmt (store_info);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:56.440544995 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:02.364492386 +0100
@@ -731,7 +731,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
             break;
         }
 
-      if (is_pattern_stmt_p (stmt_vinfo))
+      if (stmt_vinfo->pattern_stmt_p)
         {
           /* Pattern statements are not inserted into the code, so
              FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
@@ -3623,7 +3623,7 @@ vectorizable_call (stmt_vec_info stmt_in
   if (slp_node)
     return true;
 
-  if (is_pattern_stmt_p (stmt_info))
+  if (stmt_info->pattern_stmt_p)
     stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
   lhs = gimple_get_lhs (stmt_info->stmt);
 
@@ -4362,7 +4362,7 @@ vectorizable_simd_clone_call (stmt_vec_i
   if (scalar_dest)
     {
       type = TREE_TYPE (scalar_dest);
-      if (is_pattern_stmt_p (stmt_info))
+      if (stmt_info->pattern_stmt_p)
 	lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
       else
 	lhs = gimple_call_lhs (stmt);
@@ -5552,7 +5552,7 @@ vectorizable_shift (stmt_vec_info stmt_i
       /* If the shift amount is computed by a pattern stmt we cannot
          use the scalar amount directly thus give up and use a vector
 	 shift.  */
-      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
+      if (op1_def_stmt_info && op1_def_stmt_info->pattern_stmt_p)
 	scalar_shift_arg = false;
     }
   else
@@ -6286,7 +6286,7 @@ vectorizable_store (stmt_vec_info stmt_i
     {
       tree scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
-	  && is_pattern_stmt_p (stmt_info))
+	  && stmt_info->pattern_stmt_p)
 	scalar_dest = TREE_OPERAND (scalar_dest, 0);
       if (TREE_CODE (scalar_dest) != ARRAY_REF
 	  && TREE_CODE (scalar_dest) != BIT_FIELD_REF
@@ -9839,7 +9839,7 @@ vect_remove_stores (stmt_vec_info first_
   while (next_stmt_info)
     {
       stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
-      if (is_pattern_stmt_p (next_stmt_info))
+      if (next_stmt_info->pattern_stmt_p)
 	next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
       /* Free the attached stmt_vec_info and remove the stmt.  */
       next_si = gsi_for_stmt (next_stmt_info->stmt);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [37/46] Associate alignment information with stmt_vec_infos
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (35 preceding siblings ...)
  2018-07-24 10:07 ` [36/46] Add a pattern_stmt_p field to stmt_vec_info Richard Sandiford
@ 2018-07-24 10:07 ` Richard Sandiford
  2018-07-25 10:18   ` Richard Biener
  2018-07-24 10:08 ` [38/46] Pass stmt_vec_infos instead of data_references where relevant Richard Sandiford
                   ` (8 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:07 UTC (permalink / raw)
  To: gcc-patches

Alignment information is really a property of a stmt_vec_info
(and of the way we want to vectorise it) rather than of the original
scalar dr.
I think that was true even before the recent dr sharing.

This patch therefore makes the alignment-related interfaces take
stmt_vec_infos rather than data_references.
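
A caller-side sketch of the change (condensed from the hunks below):

  /* Before: alignment queries keyed off the data_reference.  */
  if (known_alignment_for_access_p (dr))
    misalign = DR_MISALIGNMENT (dr);

  /* After: they key off the stmt_vec_info, whose dr_aux field holds
     the misalignment and target alignment.  */
  if (known_alignment_for_access_p (stmt_info))
    misalign = dr_misalignment (stmt_info);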


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (STMT_VINFO_TARGET_ALIGNMENT): New macro.
	(DR_VECT_AUX, DR_MISALIGNMENT, SET_DR_MISALIGNMENT)
	(DR_TARGET_ALIGNMENT): Delete.
	(set_dr_misalignment, dr_misalignment, aligned_access_p)
	(known_alignment_for_access_p, vect_known_alignment_in_bytes)
	(vect_dr_behavior): Take a stmt_vec_info rather than a data_reference.
	* tree-vect-data-refs.c (vect_calculate_target_alignment)
	(vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
	(vector_alignment_reachable_p, vect_get_peeling_costs_all_drs)
	(vect_peeling_supportable, vect_enhance_data_refs_alignment)
	(vect_duplicate_ssa_name_ptr_info): Update after above changes.
	(vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
	(vect_setup_realignment, vect_supportable_dr_alignment): Likewise.
	* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
	(vect_gen_prolog_loop_niters): Likewise.
	* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
	(compare_step_with_zero, get_group_load_store_type): Likewise.
	(vect_get_data_ptr_increment, ensure_base_align, vectorizable_store)
	(vectorizable_load): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:02.364492386 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:05.744462369 +0100
@@ -1031,6 +1031,9 @@ #define STMT_VINFO_NUM_SLP_USES(S)	(S)->
 #define STMT_VINFO_REDUC_TYPE(S)	(S)->reduc_type
 #define STMT_VINFO_REDUC_DEF(S)		(S)->reduc_def
 
+/* Only defined once dr_misalignment is defined.  */
+#define STMT_VINFO_TARGET_ALIGNMENT(S) (S)->dr_aux.target_alignment
+
 #define DR_GROUP_FIRST_ELEMENT(S)  (gcc_checking_assert ((S)->data_ref_info), (S)->first_element)
 #define DR_GROUP_NEXT_ELEMENT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->next_element)
 #define DR_GROUP_SIZE(S)           (gcc_checking_assert ((S)->data_ref_info), (S)->size)
@@ -1048,8 +1051,6 @@ #define HYBRID_SLP_STMT(S)
 #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
 #define STMT_SLP_TYPE(S)                   (S)->slp_type
 
-#define DR_VECT_AUX(dr) (&vinfo_for_stmt (DR_STMT (dr))->dr_aux)
-
 #define VECT_MAX_COST 1000
 
 /* The maximum number of intermediate steps required in multi-step type
@@ -1256,73 +1257,72 @@ add_stmt_costs (void *data, stmt_vector_
 #define DR_MISALIGNMENT_UNKNOWN (-1)
 #define DR_MISALIGNMENT_UNINITIALIZED (-2)
 
+/* Record that the vectorized form of the data access in STMT_INFO
+   will be misaligned by VAL bytes wrt its target alignment.
+   Negative values have the meanings above.  */
+
 inline void
-set_dr_misalignment (struct data_reference *dr, int val)
+set_dr_misalignment (stmt_vec_info stmt_info, int val)
 {
-  dataref_aux *data_aux = DR_VECT_AUX (dr);
-  data_aux->misalignment = val;
+  stmt_info->dr_aux.misalignment = val;
 }
 
+/* Return the misalignment in bytes of the vectorized form of the data
+   access in STMT_INFO, relative to its target alignment.  Negative
+   values have the meanings above.  */
+
 inline int
-dr_misalignment (struct data_reference *dr)
+dr_misalignment (stmt_vec_info stmt_info)
 {
-  int misalign = DR_VECT_AUX (dr)->misalignment;
+  int misalign = stmt_info->dr_aux.misalignment;
   gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
   return misalign;
 }
 
-/* Reflects actual alignment of first access in the vectorized loop,
-   taking into account peeling/versioning if applied.  */
-#define DR_MISALIGNMENT(DR) dr_misalignment (DR)
-#define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
-
-/* Only defined once DR_MISALIGNMENT is defined.  */
-#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
-
-/* Return true if data access DR is aligned to its target alignment
-   (which may be less than a full vector).  */
+/* Return true if the vectorized form of the data access in STMT_INFO is
+   aligned to its target alignment (which may be less than a full vector).  */
 
 static inline bool
-aligned_access_p (struct data_reference *data_ref_info)
+aligned_access_p (stmt_vec_info stmt_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) == 0);
+  return (dr_misalignment (stmt_info) == 0);
 }
 
-/* Return TRUE if the alignment of the data access is known, and FALSE
-   otherwise.  */
+/* Return true if the alignment of the vectorized form of the data
+   access in STMT_INFO is known at compile time.  */
 
 static inline bool
-known_alignment_for_access_p (struct data_reference *data_ref_info)
+known_alignment_for_access_p (stmt_vec_info stmt_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
+  return (dr_misalignment (stmt_info) != DR_MISALIGNMENT_UNKNOWN);
 }
 
 /* Return the minimum alignment in bytes that the vectorized version
-   of DR is guaranteed to have.  */
+   of the data reference in STMT_INFO is guaranteed to have.  */
 
 static inline unsigned int
-vect_known_alignment_in_bytes (struct data_reference *dr)
+vect_known_alignment_in_bytes (stmt_vec_info stmt_info)
 {
-  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  int misalignment = dr_misalignment (stmt_info);
+  if (misalignment == DR_MISALIGNMENT_UNKNOWN)
     return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
-  if (DR_MISALIGNMENT (dr) == 0)
-    return DR_TARGET_ALIGNMENT (dr);
-  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
+  if (misalignment == 0)
+    return STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
+  return misalignment & -misalignment;
 }
 
-/* Return the behavior of DR with respect to the vectorization context
-   (which for outer loop vectorization might not be the behavior recorded
-   in DR itself).  */
+/* Return the data reference behavior of STMT_INFO with respect to the
+   vectorization context (which for outer loop vectorization might not
+   be the behavior recorded in STMT_VINFO_DATA_REF).  */
 
 static inline innermost_loop_behavior *
-vect_dr_behavior (data_reference *dr)
+vect_dr_behavior (stmt_vec_info stmt_info)
 {
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   if (loop_vinfo == NULL
       || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
-    return &DR_INNERMOST (dr);
+    return &DR_INNERMOST (STMT_VINFO_DATA_REF (stmt_info));
   else
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
 }
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:24:02.356492457 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:05.740462405 +0100
@@ -873,7 +873,7 @@ vect_calculate_target_alignment (struct
    Compute the misalignment of the data reference DR.
 
    Output:
-   1. DR_MISALIGNMENT (DR) is defined.
+   1. dr_misalignment (STMT_INFO) is defined.
 
    FOR NOW: No analysis is actually performed. Misalignment is calculated
    only for trivial cases. TODO.  */
@@ -896,17 +896,17 @@ vect_compute_data_ref_alignment (struct
     loop = LOOP_VINFO_LOOP (loop_vinfo);
 
   /* Initialize misalignment to unknown.  */
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
 
   if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
     return;
 
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
   bool step_preserves_misalignment_p;
 
   unsigned HOST_WIDE_INT vector_alignment
     = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
-  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
+  STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
 
   /* No step for BB vectorization.  */
   if (!loop)
@@ -1009,8 +1009,8 @@ vect_compute_data_ref_alignment (struct
           dump_printf (MSG_NOTE, "\n");
         }
 
-      DR_VECT_AUX (dr)->base_decl = base;
-      DR_VECT_AUX (dr)->base_misaligned = true;
+      stmt_info->dr_aux.base_decl = base;
+      stmt_info->dr_aux.base_misaligned = true;
       base_misalignment = 0;
     }
   poly_int64 misalignment
@@ -1038,12 +1038,13 @@ vect_compute_data_ref_alignment (struct
       return;
     }
 
-  SET_DR_MISALIGNMENT (dr, const_misalignment);
+  set_dr_misalignment (stmt_info, const_misalignment);
 
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
+		       "misalign = %d bytes of ref ",
+		       dr_misalignment (stmt_info));
       dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
       dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
     }
@@ -1089,29 +1090,29 @@ vect_update_misalignment_for_peel (struc
     {
       if (current_dr != dr)
         continue;
-      gcc_assert (!known_alignment_for_access_p (dr)
-		  || !known_alignment_for_access_p (dr_peel)
-		  || (DR_MISALIGNMENT (dr) / dr_size
-		      == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
-      SET_DR_MISALIGNMENT (dr, 0);
+      gcc_assert (!known_alignment_for_access_p (stmt_info)
+		  || !known_alignment_for_access_p (peel_stmt_info)
+		  || (dr_misalignment (stmt_info) / dr_size
+		      == dr_misalignment (peel_stmt_info) / dr_peel_size));
+      set_dr_misalignment (stmt_info, 0);
       return;
     }
 
-  if (known_alignment_for_access_p (dr)
-      && known_alignment_for_access_p (dr_peel))
+  if (known_alignment_for_access_p (stmt_info)
+      && known_alignment_for_access_p (peel_stmt_info))
     {
       bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
-      int misal = DR_MISALIGNMENT (dr);
+      int misal = dr_misalignment (stmt_info);
       misal += negative ? -npeel * dr_size : npeel * dr_size;
-      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
-      SET_DR_MISALIGNMENT (dr, misal);
+      misal &= STMT_VINFO_TARGET_ALIGNMENT (stmt_info) - 1;
+      set_dr_misalignment (stmt_info, misal);
       return;
     }
 
   if (dump_enabled_p ())
     dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
 		     "to unknown (-1).\n");
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
 }
 
 
@@ -1219,13 +1220,13 @@ vector_alignment_reachable_p (struct dat
       int elem_size, mis_in_elements;
 
       /* FORNOW: handle only known alignment.  */
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (stmt_info))
 	return false;
 
       poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
       poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
       elem_size = vector_element_size (vector_size, nelements);
-      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
+      mis_in_elements = dr_misalignment (stmt_info) / elem_size;
 
       if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
 	return false;
@@ -1233,7 +1234,8 @@ vector_alignment_reachable_p (struct dat
 
   /* If misalignment is known at the compile time then allow peeling
      only if natural alignment is reachable through peeling.  */
-  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
+  if (known_alignment_for_access_p (stmt_info)
+      && !aligned_access_p (stmt_info))
     {
       HOST_WIDE_INT elmsize =
 		int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
@@ -1241,10 +1243,10 @@ vector_alignment_reachable_p (struct dat
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
 	                   "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
-	  dump_printf (MSG_NOTE,
-	               ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
+	  dump_printf (MSG_NOTE, ". misalignment = %d.\n",
+		       dr_misalignment (stmt_info));
 	}
-      if (DR_MISALIGNMENT (dr) % elmsize)
+      if (dr_misalignment (stmt_info) % elmsize)
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1253,7 +1255,7 @@ vector_alignment_reachable_p (struct dat
 	}
     }
 
-  if (!known_alignment_for_access_p (dr))
+  if (!known_alignment_for_access_p (stmt_info))
     {
       tree type = TREE_TYPE (DR_REF (dr));
       bool is_packed = not_size_aligned (DR_REF (dr));
@@ -1401,6 +1403,8 @@ vect_get_peeling_costs_all_drs (vec<data
 				unsigned int npeel,
 				bool unknown_misalignment)
 {
+  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
+				  : NULL_STMT_VEC_INFO);
   unsigned i;
   data_reference *dr;
 
@@ -1423,16 +1427,16 @@ vect_get_peeling_costs_all_drs (vec<data
 	continue;
 
       int save_misalignment;
-      save_misalignment = DR_MISALIGNMENT (dr);
+      save_misalignment = dr_misalignment (stmt_info);
       if (npeel == 0)
 	;
-      else if (unknown_misalignment && dr == dr0)
-	SET_DR_MISALIGNMENT (dr, 0);
+      else if (unknown_misalignment && stmt_info == peel_stmt_info)
+	set_dr_misalignment (stmt_info, 0);
       else
 	vect_update_misalignment_for_peel (dr, dr0, npeel);
       vect_get_data_access_cost (dr, inside_cost, outside_cost,
 				 body_cost_vec, prologue_cost_vec);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      set_dr_misalignment (stmt_info, save_misalignment);
     }
 }
 
@@ -1552,10 +1556,10 @@ vect_peeling_supportable (loop_vec_info
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      save_misalignment = DR_MISALIGNMENT (dr);
+      save_misalignment = dr_misalignment (stmt_info);
       vect_update_misalignment_for_peel (dr, dr0, npeel);
       supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      set_dr_misalignment (stmt_info, save_misalignment);
 
       if (!supportable_dr_alignment)
 	return false;
@@ -1598,27 +1602,27 @@ vect_peeling_supportable (loop_vec_info
 
      -- original loop, before alignment analysis:
 	for (i=0; i<N; i++){
-	  x = q[i];			# DR_MISALIGNMENT(q) = unknown
-	  p[i] = y;			# DR_MISALIGNMENT(p) = unknown
+	  x = q[i];			# dr_misalignment(q) = unknown
+	  p[i] = y;			# dr_misalignment(p) = unknown
 	}
 
      -- After vect_compute_data_refs_alignment:
 	for (i=0; i<N; i++){
-	  x = q[i];			# DR_MISALIGNMENT(q) = 3
-	  p[i] = y;			# DR_MISALIGNMENT(p) = unknown
+	  x = q[i];			# dr_misalignment(q) = 3
+	  p[i] = y;			# dr_misalignment(p) = unknown
 	}
 
      -- Possibility 1: we do loop versioning:
      if (p is aligned) {
 	for (i=0; i<N; i++){	# loop 1A
-	  x = q[i];			# DR_MISALIGNMENT(q) = 3
-	  p[i] = y;			# DR_MISALIGNMENT(p) = 0
+	  x = q[i];			# dr_misalignment(q) = 3
+	  p[i] = y;			# dr_misalignment(p) = 0
 	}
      }
      else {
 	for (i=0; i<N; i++){	# loop 1B
-	  x = q[i];			# DR_MISALIGNMENT(q) = 3
-	  p[i] = y;			# DR_MISALIGNMENT(p) = unaligned
+	  x = q[i];			# dr_misalignment(q) = 3
+	  p[i] = y;			# dr_misalignment(p) = unaligned
 	}
      }
 
@@ -1628,8 +1632,8 @@ vect_peeling_supportable (loop_vec_info
 	p[i] = y;
      }
      for (i = 3; i < N; i++){	# loop 2A
-	x = q[i];			# DR_MISALIGNMENT(q) = 0
-	p[i] = y;			# DR_MISALIGNMENT(p) = unknown
+	x = q[i];			# dr_misalignment(q) = 0
+	p[i] = y;			# dr_misalignment(p) = unknown
      }
 
      -- Possibility 3: combination of loop peeling and versioning:
@@ -1639,14 +1643,14 @@ vect_peeling_supportable (loop_vec_info
      }
      if (p is aligned) {
 	for (i = 3; i<N; i++){	# loop 3A
-	  x = q[i];			# DR_MISALIGNMENT(q) = 0
-	  p[i] = y;			# DR_MISALIGNMENT(p) = 0
+	  x = q[i];			# dr_misalignment(q) = 0
+	  p[i] = y;			# dr_misalignment(p) = 0
 	}
      }
      else {
 	for (i = 3; i<N; i++){	# loop 3B
-	  x = q[i];			# DR_MISALIGNMENT(q) = 0
-	  p[i] = y;			# DR_MISALIGNMENT(p) = unaligned
+	  x = q[i];			# dr_misalignment(q) = 0
+	  p[i] = y;			# dr_misalignment(p) = unaligned
 	}
      }
 
@@ -1745,17 +1749,20 @@ vect_enhance_data_refs_alignment (loop_v
       do_peeling = vector_alignment_reachable_p (dr);
       if (do_peeling)
         {
-          if (known_alignment_for_access_p (dr))
+	  if (known_alignment_for_access_p (stmt_info))
             {
 	      unsigned int npeel_tmp = 0;
 	      bool negative = tree_int_cst_compare (DR_STEP (dr),
 						    size_zero_node) < 0;
 
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+	      unsigned int target_align
+		= STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
 	      unsigned int dr_size = vect_get_scalar_dr_size (dr);
-	      mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
-	      if (DR_MISALIGNMENT (dr) != 0)
+	      mis = (negative
+		     ? dr_misalignment (stmt_info)
+		     : -dr_misalignment (stmt_info));
+	      if (mis != 0)
 		npeel_tmp = (mis & (target_align - 1)) / dr_size;
 
               /* For multiple types, it is possible that the bigger type access
@@ -1780,7 +1787,7 @@ vect_enhance_data_refs_alignment (loop_v
 
 		  /* NPEEL_TMP is 0 when there is no misalignment, but also
 		     allow peeling NELEMENTS.  */
-		  if (DR_MISALIGNMENT (dr) == 0)
+		  if (dr_misalignment (stmt_info) == 0)
 		    possible_npeel_number++;
 		}
 
@@ -1841,7 +1848,7 @@ vect_enhance_data_refs_alignment (loop_v
         }
       else
         {
-          if (!aligned_access_p (dr))
+	  if (!aligned_access_p (stmt_info))
             {
               if (dump_enabled_p ())
                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2010,10 +2017,10 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
-      vectype = STMT_VINFO_VECTYPE (stmt_info);
+      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
+      vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
 
-      if (known_alignment_for_access_p (dr0))
+      if (known_alignment_for_access_p (peel_stmt_info))
         {
 	  bool negative = tree_int_cst_compare (DR_STEP (dr0),
 						size_zero_node) < 0;
@@ -2021,11 +2028,14 @@ vect_enhance_data_refs_alignment (loop_v
             {
               /* Since it's known at compile time, compute the number of
                  iterations in the peeled loop (the peeling factor) for use in
-                 updating DR_MISALIGNMENT values.  The peeling factor is the
+                 updating dr_misalignment values.  The peeling factor is the
                  vectorization factor minus the misalignment as an element
                  count.  */
-	      mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+	      mis = (negative
+		     ? dr_misalignment (peel_stmt_info)
+		     : -dr_misalignment (peel_stmt_info));
+	      unsigned int target_align
+		= STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
 	      npeel = ((mis & (target_align - 1))
 		       / vect_get_scalar_dr_size (dr0));
             }
@@ -2033,9 +2043,8 @@ vect_enhance_data_refs_alignment (loop_v
 	  /* For interleaved data access every iteration accesses all the
 	     members of the group, therefore we divide the number of iterations
 	     by the group size.  */
-	  stmt_info = vect_dr_stmt (dr0);
-	  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
-	    npeel /= DR_GROUP_SIZE (stmt_info);
+	  if (STMT_VINFO_GROUPED_ACCESS (peel_stmt_info))
+	    npeel /= DR_GROUP_SIZE (peel_stmt_info);
 
           if (dump_enabled_p ())
             dump_printf_loc (MSG_NOTE, vect_location,
@@ -2047,7 +2056,9 @@ vect_enhance_data_refs_alignment (loop_v
 	do_peeling = false;
 
       /* Check if all datarefs are supportable and log.  */
-      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
+      if (do_peeling
+	  && known_alignment_for_access_p (peel_stmt_info)
+	  && npeel == 0)
         {
           stat = vect_verify_datarefs_alignment (loop_vinfo);
           if (!stat)
@@ -2066,7 +2077,8 @@ vect_enhance_data_refs_alignment (loop_v
               unsigned max_peel = npeel;
               if (max_peel == 0)
                 {
-		  unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+		  unsigned int target_align
+		    = STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
 		  max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
                 }
               if (max_peel > max_allowed_peel)
@@ -2095,19 +2107,20 @@ vect_enhance_data_refs_alignment (loop_v
 
       if (do_peeling)
         {
-          /* (1.2) Update the DR_MISALIGNMENT of each data reference DR_i.
-             If the misalignment of DR_i is identical to that of dr0 then set
-             DR_MISALIGNMENT (DR_i) to zero.  If the misalignment of DR_i and
-             dr0 are known at compile time then increment DR_MISALIGNMENT (DR_i)
-             by the peeling factor times the element size of DR_i (MOD the
-             vectorization factor times the size).  Otherwise, the
-             misalignment of DR_i must be set to unknown.  */
+	  /* (1.2) Update the dr_misalignment of each data reference
+	     statement STMT_i.  If the misalignment of STMT_i is identical
+	     to that of PEEL_STMT_INFO then set dr_misalignment (STMT_i)
+	     to zero.  If the misalignment of STMT_i and PEEL_STMT_INFO are
+	     known at compile time then increment dr_misalignment (STMT_i)
+	     by the peeling factor times the element size of STMT_i (MOD
+	     the vectorization factor times the size).  Otherwise, the
+	     misalignment of STMT_i must be set to unknown.  */
 	  FOR_EACH_VEC_ELT (datarefs, i, dr)
 	    if (dr != dr0)
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		stmt_info = vect_dr_stmt (dr);
+		stmt_vec_info stmt_info = vect_dr_stmt (dr);
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
@@ -2120,8 +2133,8 @@ vect_enhance_data_refs_alignment (loop_v
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
           else
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
-	      = DR_MISALIGNMENT (dr0);
-	  SET_DR_MISALIGNMENT (dr0, 0);
+	      = dr_misalignment (peel_stmt_info);
+	  set_dr_misalignment (peel_stmt_info, 0);
 	  if (dump_enabled_p ())
             {
               dump_printf_loc (MSG_NOTE, vect_location,
@@ -2160,7 +2173,7 @@ vect_enhance_data_refs_alignment (loop_v
 
 	  /* For interleaving, only the alignment of the first access
 	     matters.  */
-	  if (aligned_access_p (dr)
+	  if (aligned_access_p (stmt_info)
 	      || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
 		  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
 	    continue;
@@ -2182,7 +2195,7 @@ vect_enhance_data_refs_alignment (loop_v
               int mask;
               tree vectype;
 
-              if (known_alignment_for_access_p (dr)
+              if (known_alignment_for_access_p (stmt_info)
                   || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
                      >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
                 {
@@ -2241,8 +2254,7 @@ vect_enhance_data_refs_alignment (loop_v
          of the loop being vectorized.  */
       FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
         {
-          dr = STMT_VINFO_DATA_REF (stmt_info);
-	  SET_DR_MISALIGNMENT (dr, 0);
+	  set_dr_misalignment (stmt_info, 0);
 	  if (dump_enabled_p ())
             dump_printf_loc (MSG_NOTE, vect_location,
                              "Alignment of access forced using versioning.\n");
@@ -4456,13 +4468,14 @@ vect_get_new_ssa_name (tree type, enum v
 static void
 vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
 {
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
-  int misalign = DR_MISALIGNMENT (dr);
+  int misalign = dr_misalignment (stmt_info);
   if (misalign == DR_MISALIGNMENT_UNKNOWN)
     mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (name));
   else
     set_ptr_info_alignment (SSA_NAME_PTR_INFO (name),
-			    DR_TARGET_ALIGNMENT (dr), misalign);
+			    STMT_VINFO_TARGET_ALIGNMENT (stmt_info), misalign);
 }
 
 /* Function vect_create_addr_base_for_vector_ref.
@@ -4513,7 +4526,7 @@ vect_create_addr_base_for_vector_ref (st
   tree vect_ptr_type;
   tree step = TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr)));
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
 
   tree data_ref_base = unshare_expr (drb->base_address);
   tree base_offset = unshare_expr (drb->offset);
@@ -4687,7 +4700,7 @@ vect_create_data_ref_ptr (stmt_vec_info
 
   /* Check the step (evolution) of the load in LOOP, and record
      whether it's invariant.  */
-  step = vect_dr_behavior (dr)->step;
+  step = vect_dr_behavior (stmt_info)->step;
   if (integer_zerop (step))
     *inv_p = true;
   else
@@ -5519,7 +5532,7 @@ vect_setup_realignment (stmt_vec_info st
 	new_temp = copy_ssa_name (ptr);
       else
 	new_temp = make_ssa_name (TREE_TYPE (ptr));
-      unsigned int align = DR_TARGET_ALIGNMENT (dr);
+      unsigned int align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
       new_stmt = gimple_build_assign
 		   (new_temp, BIT_AND_EXPR, ptr,
 		    build_int_cst (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
@@ -6438,7 +6451,7 @@ vect_supportable_dr_alignment (struct da
   struct loop *vect_loop = NULL;
   bool nested_in_vect_loop = false;
 
-  if (aligned_access_p (dr) && !check_aligned_accesses)
+  if (aligned_access_p (stmt_info) && !check_aligned_accesses)
     return dr_aligned;
 
   /* For now assume all conditional loads/stores support unaligned
@@ -6546,11 +6559,11 @@ vect_supportable_dr_alignment (struct da
 	  else
 	    return dr_explicit_realign_optimized;
 	}
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (stmt_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
       if (targetm.vectorize.support_vector_misalignment
-	    (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	    (mode, type, dr_misalignment (stmt_info), is_packed))
 	/* Can't software pipeline the loads, but can at least do them.  */
 	return dr_unaligned_supported;
     }
@@ -6559,11 +6572,11 @@ vect_supportable_dr_alignment (struct da
       bool is_packed = false;
       tree type = (TREE_TYPE (DR_REF (dr)));
 
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (stmt_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
      if (targetm.vectorize.support_vector_misalignment
-	   (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	   (mode, type, dr_misalignment (stmt_info), is_packed))
        return dr_unaligned_supported;
     }
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:23:46.112636713 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:24:05.740462405 +0100
@@ -1564,7 +1564,7 @@ get_misalign_in_elems (gimple **seq, loo
   stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
   gcc_assert (target_align != 0);
 
   bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
@@ -1600,7 +1600,7 @@ get_misalign_in_elems (gimple **seq, loo
    refer to an aligned location.  The following computation is generated:
 
    If the misalignment of DR is known at compile time:
-     addr_mis = int mis = DR_MISALIGNMENT (dr);
+     addr_mis = int mis = dr_misalignment (stmt-containing-DR);
    Else, compute address misalignment in bytes:
      addr_mis = addr & (target_align - 1)
 
@@ -1633,7 +1633,7 @@ vect_gen_prolog_loop_niters (loop_vec_in
   tree iters, iters_name;
   stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
 
   if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) > 0)
     {
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:02.364492386 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:05.744462369 +0100
@@ -1079,7 +1079,8 @@ vect_get_store_cost (stmt_vec_info stmt_
         /* Here, we assign an additional cost for the unaligned store.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_store, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  dr_misalignment (stmt_info),
+					  vect_body);
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
                            "vect_model_store_cost: unaligned supported by "
@@ -1257,7 +1258,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
         /* Here, we assign an additional cost for the unaligned load.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_load, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  dr_misalignment (stmt_info),
+					  vect_body);
 
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
@@ -2102,8 +2104,7 @@ vect_use_strided_gather_scatters_p (stmt
 static int
 compare_step_with_zero (stmt_vec_info stmt_info)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  return tree_int_cst_compare (vect_dr_behavior (dr)->step,
+  return tree_int_cst_compare (vect_dr_behavior (stmt_info)->step,
 			       size_zero_node);
 }
 
@@ -2218,7 +2219,7 @@ get_group_load_store_type (stmt_vec_info
 	     be a multiple of B and so we are guaranteed to access a
 	     non-gap element in the same B-sized block.  */
 	  if (overrun_p
-	      && gap < (vect_known_alignment_in_bytes (first_dr)
+	      && gap < (vect_known_alignment_in_bytes (first_stmt_info)
 			/ vect_get_scalar_dr_size (first_dr)))
 	    overrun_p = false;
 	  if (overrun_p && !can_overrun_p)
@@ -2246,7 +2247,7 @@ get_group_load_store_type (stmt_vec_info
 	 same B-sized block.  */
       if (would_overrun_p
 	  && !masked_p
-	  && gap < (vect_known_alignment_in_bytes (first_dr)
+	  && gap < (vect_known_alignment_in_bytes (first_stmt_info)
 		    / vect_get_scalar_dr_size (first_dr)))
 	would_overrun_p = false;
 
@@ -2931,11 +2932,12 @@ vect_get_strided_load_store_ops (stmt_ve
 vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
 			     vect_memory_access_type memory_access_type)
 {
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   if (memory_access_type == VMAT_INVARIANT)
     return size_zero_node;
 
   tree iv_step = TYPE_SIZE_UNIT (aggr_type);
-  tree step = vect_dr_behavior (dr)->step;
+  tree step = vect_dr_behavior (stmt_info)->step;
   if (tree_int_cst_sgn (step) == -1)
     iv_step = fold_build1 (NEGATE_EXPR, TREE_TYPE (iv_step), iv_step);
   return iv_step;
@@ -6174,14 +6176,16 @@ vectorizable_operation (stmt_vec_info st
 static void
 ensure_base_align (struct data_reference *dr)
 {
-  if (DR_VECT_AUX (dr)->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
+  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
     return;
 
-  if (DR_VECT_AUX (dr)->base_misaligned)
+  if (stmt_info->dr_aux.base_misaligned)
     {
-      tree base_decl = DR_VECT_AUX (dr)->base_decl;
+      tree base_decl = stmt_info->dr_aux.base_decl;
 
-      unsigned int align_base_to = DR_TARGET_ALIGNMENT (dr) * BITS_PER_UNIT;
+      unsigned int align_base_to = (stmt_info->dr_aux.target_alignment
+				    * BITS_PER_UNIT);
 
       if (decl_in_symtab_p (base_decl))
 	symtab_node::get (base_decl)->increase_alignment (align_base_to);
@@ -6190,7 +6194,7 @@ ensure_base_align (struct data_reference
 	  SET_DECL_ALIGN (base_decl, align_base_to);
           DECL_USER_ALIGN (base_decl) = 1;
 	}
-      DR_VECT_AUX (dr)->base_misaligned = false;
+      stmt_info->dr_aux.base_misaligned = false;
     }
 }
 
@@ -7175,16 +7179,16 @@ vectorizable_store (stmt_vec_info stmt_i
 		   vect_permute_store_chain().  */
 		vec_oprnd = result_chain[i];
 
-	      align = DR_TARGET_ALIGNMENT (first_dr);
-	      if (aligned_access_p (first_dr))
+	      align = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
+	      if (aligned_access_p (first_stmt_info))
 		misalign = 0;
-	      else if (DR_MISALIGNMENT (first_dr) == -1)
+	      else if (dr_misalignment (first_stmt_info) == -1)
 		{
-		  align = dr_alignment (vect_dr_behavior (first_dr));
+		  align = dr_alignment (vect_dr_behavior (first_stmt_info));
 		  misalign = 0;
 		}
 	      else
-		misalign = DR_MISALIGNMENT (first_dr);
+		misalign = dr_misalignment (first_stmt_info);
 	      if (dataref_offset == NULL_TREE
 		  && TREE_CODE (dataref_ptr) == SSA_NAME)
 		set_ptr_info_alignment (get_ptr_info (dataref_ptr), align,
@@ -7227,9 +7231,9 @@ vectorizable_store (stmt_vec_info stmt_i
 					  dataref_offset
 					  ? dataref_offset
 					  : build_int_cst (ref_type, 0));
-		  if (aligned_access_p (first_dr))
+		  if (aligned_access_p (first_stmt_info))
 		    ;
-		  else if (DR_MISALIGNMENT (first_dr) == -1)
+		  else if (dr_misalignment (first_stmt_info) == -1)
 		    TREE_TYPE (data_ref)
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    align * BITS_PER_UNIT);
@@ -8326,19 +8330,20 @@ vectorizable_load (stmt_vec_info stmt_in
 			break;
 		      }
 
-		    align = DR_TARGET_ALIGNMENT (dr);
+		    align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
 		    if (alignment_support_scheme == dr_aligned)
 		      {
-			gcc_assert (aligned_access_p (first_dr));
+			gcc_assert (aligned_access_p (first_stmt_info));
 			misalign = 0;
 		      }
-		    else if (DR_MISALIGNMENT (first_dr) == -1)
+		    else if (dr_misalignment (first_stmt_info) == -1)
 		      {
-			align = dr_alignment (vect_dr_behavior (first_dr));
+			align = dr_alignment
+			  (vect_dr_behavior (first_stmt_info));
 			misalign = 0;
 		      }
 		    else
-		      misalign = DR_MISALIGNMENT (first_dr);
+		      misalign = dr_misalignment (first_stmt_info);
 		    if (dataref_offset == NULL_TREE
 			&& TREE_CODE (dataref_ptr) == SSA_NAME)
 		      set_ptr_info_alignment (get_ptr_info (dataref_ptr),
@@ -8365,7 +8370,7 @@ vectorizable_load (stmt_vec_info stmt_in
 					 : build_int_cst (ref_type, 0));
 			if (alignment_support_scheme == dr_aligned)
 			  ;
-			else if (DR_MISALIGNMENT (first_dr) == -1)
+			else if (dr_misalignment (first_stmt_info) == -1)
 			  TREE_TYPE (data_ref)
 			    = build_aligned_type (TREE_TYPE (data_ref),
 						  align * BITS_PER_UNIT);
@@ -8392,7 +8397,8 @@ vectorizable_load (stmt_vec_info stmt_in
 		      ptr = copy_ssa_name (dataref_ptr);
 		    else
 		      ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align
+		      = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
 		    new_stmt = gimple_build_assign
 				 (ptr, BIT_AND_EXPR, dataref_ptr,
 				  build_int_cst
@@ -8436,7 +8442,8 @@ vectorizable_load (stmt_vec_info stmt_in
 		      new_temp = copy_ssa_name (dataref_ptr);
 		    else
 		      new_temp = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align
+		      = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
 		    new_stmt = gimple_build_assign
 		      (new_temp, BIT_AND_EXPR, dataref_ptr,
 		       build_int_cst (TREE_TYPE (dataref_ptr),

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [38/46] Pass stmt_vec_infos instead of data_references where relevant
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (36 preceding siblings ...)
  2018-07-24 10:07 ` [37/46] Associate alignment information with stmt_vec_infos Richard Sandiford
@ 2018-07-24 10:08 ` Richard Sandiford
  2018-07-25 10:21   ` Richard Biener
  2018-07-24 10:08 ` [39/46] Replace STMT_VINFO_UNALIGNED_DR with the associated statement Richard Sandiford
                   ` (7 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:08 UTC (permalink / raw)
  To: gcc-patches

This patch makes various routines (mostly in tree-vect-data-refs.c)
take stmt_vec_infos rather than data_references.  The affected routines
are really dealing with the way that an access is going to be
vectorised for a particular stmt_vec_info, rather than with the
original scalar access described by the data_reference.
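
To illustrate the shape of the change (example_misalign_known_p below
is a made-up stand-in rather than one of the converted routines):
before the patch, such a routine would take the data_reference and
look up the stmt_vec_info itself, whereas afterwards it takes the
stmt_vec_info directly and only recovers the data_reference where the
scalar access matters:

  /* Before: start from the DR and look up the vector-side info.  */
  static bool
  example_misalign_known_p (struct data_reference *dr)
  {
    stmt_vec_info stmt_info = vect_dr_stmt (dr);
    return dr_misalignment (stmt_info) != DR_MISALIGNMENT_UNKNOWN;
  }

  /* After: start from the stmt_vec_info, with no lookup needed;
     STMT_VINFO_DATA_REF is still available if the DR is wanted.  */
  static bool
  example_misalign_known_p (stmt_vec_info stmt_info)
  {
    return dr_misalignment (stmt_info) != DR_MISALIGNMENT_UNKNOWN;
  }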


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_supportable_dr_alignment): Take
	a stmt_vec_info rather than a data_reference.
	* tree-vect-data-refs.c (vect_calculate_target_alignment)
	(vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
	(verify_data_ref_alignment, vector_alignment_reachable_p)
	(vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
	(vect_peeling_supportable, vect_analyze_group_access_1)
	(vect_analyze_group_access, vect_analyze_data_ref_access)
	(vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
	(vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
	(vect_supportable_dr_alignment): Likewise.  Update calls to other
	functions for which the same change is being made.
	(vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
	(vect_analyze_data_refs_alignment): Update calls accordingly.
	(vect_slp_analyze_and_verify_node_alignment): Likewise.
	(vect_analyze_data_ref_accesses): Likewise.
	(vect_prune_runtime_alias_test_list): Likewise.
	(vect_create_addr_base_for_vector_ref): Likewise.
	(vect_create_data_ref_ptr): Likewise.
	(_vect_peel_info::dr): Replace with...
	(_vect_peel_info::stmt_info): ...this new field.
	(vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
	accordingly, and update after above interface changes.
	(vect_peeling_hash_get_lowest_cost): Likewise
	(vect_peeling_hash_choose_best_peeling): Likewise.
	(vect_enhance_data_refs_alignment): Likewise.
	(vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
	rather than a data_reference.
	* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
	(get_negative_load_store_type): Update calls to
	vect_supportable_dr_alignment.
	(vect_get_data_ptr_increment, ensure_base_align): Take a
	stmt_vec_info instead of a data_reference.
	(vectorizable_store, vectorizable_load): Update calls after
	above interface changes.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:05.744462369 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:08.924434128 +0100
@@ -1541,7 +1541,7 @@ extern tree vect_get_mask_type_for_stmt
 /* In tree-vect-data-refs.c.  */
 extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
 extern enum dr_alignment_support vect_supportable_dr_alignment
-                                           (struct data_reference *, bool);
+  (stmt_vec_info, bool);
 extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
                                            HOST_WIDE_INT *);
 extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:24:05.740462405 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:08.924434128 +0100
@@ -858,19 +858,19 @@ vect_record_base_alignments (vec_info *v
     }
 }
 
-/* Return the target alignment for the vectorized form of DR.  */
+/* Return the target alignment for the vectorized form of the load or store
+   in STMT_INFO.  */
 
 static unsigned int
-vect_calculate_target_alignment (struct data_reference *dr)
+vect_calculate_target_alignment (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   return targetm.vectorize.preferred_vector_alignment (vectype);
 }
 
 /* Function vect_compute_data_ref_alignment
 
-   Compute the misalignment of the data reference DR.
+   Compute the misalignment of the load or store in STMT_INFO.
 
    Output:
    1. dr_misalignment (STMT_INFO) is defined.
@@ -879,9 +879,9 @@ vect_calculate_target_alignment (struct
    only for trivial cases. TODO.  */
 
 static void
-vect_compute_data_ref_alignment (struct data_reference *dr)
+vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -905,7 +905,7 @@ vect_compute_data_ref_alignment (struct
   bool step_preserves_misalignment_p;
 
   unsigned HOST_WIDE_INT vector_alignment
-    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
+    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
   STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
 
   /* No step for BB vectorization.  */
@@ -1053,28 +1053,28 @@ vect_compute_data_ref_alignment (struct
 }
 
 /* Function vect_update_misalignment_for_peel.
-   Sets DR's misalignment
-   - to 0 if it has the same alignment as DR_PEEL,
+   Sets the misalignment of the load or store in STMT_INFO
+   - to 0 if it has the same alignment as PEEL_STMT_INFO,
    - to the misalignment computed using NPEEL if DR's salignment is known,
    - to -1 (unknown) otherwise.
 
-   DR - the data reference whose misalignment is to be adjusted.
-   DR_PEEL - the data reference whose misalignment is being made
-             zero in the vector loop by the peel.
+   STMT_INFO - the load or store whose misalignment is to be adjusted.
+   PEEL_STMT_INFO - the load or store whose misalignment is being made
+		    zero in the vector loop by the peel.
    NPEEL - the number of iterations in the peel loop if the misalignment
-           of DR_PEEL is known at compile time.  */
+	   of PEEL_STMT_INFO is known at compile time.  */
 
 static void
-vect_update_misalignment_for_peel (struct data_reference *dr,
-                                   struct data_reference *dr_peel, int npeel)
+vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
+				   stmt_vec_info peel_stmt_info, int npeel)
 {
   unsigned int i;
   vec<dr_p> same_aligned_drs;
   struct data_reference *current_dr;
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
   int dr_size = vect_get_scalar_dr_size (dr);
   int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
 
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
@@ -1085,7 +1085,7 @@ vect_update_misalignment_for_peel (struc
 
   /* It can be assumed that the data refs with the same alignment as dr_peel
      are aligned in the vector loop.  */
-  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
+  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
   FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
     {
       if (current_dr != dr)
@@ -1118,13 +1118,15 @@ vect_update_misalignment_for_peel (struc
 
 /* Function verify_data_ref_alignment
 
-   Return TRUE if DR can be handled with respect to alignment.  */
+   Return TRUE if the load or store in STMT_INFO can be handled with
+   respect to alignment.  */
 
 static bool
-verify_data_ref_alignment (data_reference_p dr)
+verify_data_ref_alignment (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   enum dr_alignment_support supportable_dr_alignment
-    = vect_supportable_dr_alignment (dr, false);
+    = vect_supportable_dr_alignment (stmt_info, false);
   if (!supportable_dr_alignment)
     {
       if (dump_enabled_p ())
@@ -1181,7 +1183,7 @@ vect_verify_datarefs_alignment (loop_vec
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      if (! verify_data_ref_alignment (dr))
+      if (! verify_data_ref_alignment (stmt_info))
 	return false;
     }
 
@@ -1203,13 +1205,13 @@ not_size_aligned (tree exp)
 
 /* Function vector_alignment_reachable_p
 
-   Return true if vector alignment for DR is reachable by peeling
-   a few loop iterations.  Return false otherwise.  */
+   Return true if the vector alignment is reachable for the load or store
+   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
 
 static bool
-vector_alignment_reachable_p (struct data_reference *dr)
+vector_alignment_reachable_p (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -1270,16 +1272,16 @@ vector_alignment_reachable_p (struct dat
 }
 
 
-/* Calculate the cost of the memory access represented by DR.  */
+/* Calculate the cost of the memory access in STMT_INFO.  */
 
 static void
-vect_get_data_access_cost (struct data_reference *dr,
+vect_get_data_access_cost (stmt_vec_info stmt_info,
                            unsigned int *inside_cost,
                            unsigned int *outside_cost,
 			   stmt_vector_for_cost *body_cost_vec,
 			   stmt_vector_for_cost *prologue_cost_vec)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   int ncopies;
 
@@ -1303,7 +1305,7 @@ vect_get_data_access_cost (struct data_r
 
 typedef struct _vect_peel_info
 {
-  struct data_reference *dr;
+  stmt_vec_info stmt_info;
   int npeel;
   unsigned int count;
 } *vect_peel_info;
@@ -1337,16 +1339,17 @@ peel_info_hasher::equal (const _vect_pee
 }
 
 
-/* Insert DR into peeling hash table with NPEEL as key.  */
+/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
 
 static void
 vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
-			  loop_vec_info loop_vinfo, struct data_reference *dr,
+			  loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
                           int npeel)
 {
   struct _vect_peel_info elem, *slot;
   _vect_peel_info **new_slot;
-  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
+  bool supportable_dr_alignment
+    = vect_supportable_dr_alignment (stmt_info, true);
 
   elem.npeel = npeel;
   slot = peeling_htab->find (&elem);
@@ -1356,7 +1359,7 @@ vect_peeling_hash_insert (hash_table<pee
     {
       slot = XNEW (struct _vect_peel_info);
       slot->npeel = npeel;
-      slot->dr = dr;
+      slot->stmt_info = stmt_info;
       slot->count = 1;
       new_slot = peeling_htab->find_slot (slot, INSERT);
       *new_slot = slot;
@@ -1383,19 +1386,19 @@ vect_peeling_hash_get_most_frequent (_ve
     {
       max->peel_info.npeel = elem->npeel;
       max->peel_info.count = elem->count;
-      max->peel_info.dr = elem->dr;
+      max->peel_info.stmt_info = elem->stmt_info;
     }
 
   return 1;
 }
 
 /* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
-   misalignment will be zero after peeling.  */
+   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
+   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
 
 static void
 vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
-				struct data_reference *dr0,
+				stmt_vec_info peel_stmt_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
 				stmt_vector_for_cost *body_cost_vec,
@@ -1403,8 +1406,6 @@ vect_get_peeling_costs_all_drs (vec<data
 				unsigned int npeel,
 				bool unknown_misalignment)
 {
-  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
-				  : NULL_STMT_VEC_INFO);
   unsigned i;
   data_reference *dr;
 
@@ -1433,8 +1434,8 @@ vect_get_peeling_costs_all_drs (vec<data
       else if (unknown_misalignment && stmt_info == peel_stmt_info)
 	set_dr_misalignment (stmt_info, 0);
       else
-	vect_update_misalignment_for_peel (dr, dr0, npeel);
-      vect_get_data_access_cost (dr, inside_cost, outside_cost,
+	vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
+      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
 				 body_cost_vec, prologue_cost_vec);
       set_dr_misalignment (stmt_info, save_misalignment);
     }
@@ -1450,7 +1451,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   vect_peel_info elem = *slot;
   int dummy;
   unsigned int inside_cost = 0, outside_cost = 0;
-  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
+  stmt_vec_info stmt_info = elem->stmt_info;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
 		       epilogue_cost_vec;
@@ -1460,7 +1461,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   epilogue_cost_vec.create (2);
 
   vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->dr, &inside_cost, &outside_cost,
+				  elem->stmt_info, &inside_cost, &outside_cost,
 				  &body_cost_vec, &prologue_cost_vec,
 				  elem->npeel, false);
 
@@ -1484,7 +1485,7 @@ vect_peeling_hash_get_lowest_cost (_vect
     {
       min->inside_cost = inside_cost;
       min->outside_cost = outside_cost;
-      min->peel_info.dr = elem->dr;
+      min->peel_info.stmt_info = elem->stmt_info;
       min->peel_info.npeel = elem->npeel;
       min->peel_info.count = elem->count;
     }
@@ -1503,7 +1504,7 @@ vect_peeling_hash_choose_best_peeling (h
 {
    struct _vect_peel_extended_info res;
 
-   res.peel_info.dr = NULL;
+   res.peel_info.stmt_info = NULL;
 
    if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
      {
@@ -1527,8 +1528,8 @@ vect_peeling_hash_choose_best_peeling (h
 /* Return true if the new peeling NPEEL is supported.  */
 
 static bool
-vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
-			  unsigned npeel)
+vect_peeling_supportable (loop_vec_info loop_vinfo,
+			  stmt_vec_info peel_stmt_info, unsigned npeel)
 {
   unsigned i;
   struct data_reference *dr = NULL;
@@ -1540,10 +1541,10 @@ vect_peeling_supportable (loop_vec_info
     {
       int save_misalignment;
 
-      if (dr == dr0)
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      if (stmt_info == peel_stmt_info)
 	continue;
 
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       /* For interleaving, only the alignment of the first access
 	 matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
@@ -1557,8 +1558,9 @@ vect_peeling_supportable (loop_vec_info
 	continue;
 
       save_misalignment = dr_misalignment (stmt_info);
-      vect_update_misalignment_for_peel (dr, dr0, npeel);
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (stmt_info, false);
       set_dr_misalignment (stmt_info, save_misalignment);
 
       if (!supportable_dr_alignment)
@@ -1665,8 +1667,9 @@ vect_enhance_data_refs_alignment (loop_v
   vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   enum dr_alignment_support supportable_dr_alignment;
-  struct data_reference *dr0 = NULL, *first_store = NULL;
   struct data_reference *dr;
+  stmt_vec_info peel_stmt_info = NULL;
+  stmt_vec_info first_store_info = NULL;
   unsigned int i, j;
   bool do_peeling = false;
   bool do_versioning = false;
@@ -1675,7 +1678,7 @@ vect_enhance_data_refs_alignment (loop_v
   bool one_misalignment_known = false;
   bool one_misalignment_unknown = false;
   bool one_dr_unsupportable = false;
-  struct data_reference *unsupportable_dr = NULL;
+  stmt_vec_info unsupportable_stmt_info = NULL;
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
   unsigned possible_npeel_number = 1;
   tree vectype;
@@ -1745,8 +1748,9 @@ vect_enhance_data_refs_alignment (loop_v
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
-      do_peeling = vector_alignment_reachable_p (dr);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (stmt_info, true);
+      do_peeling = vector_alignment_reachable_p (stmt_info);
       if (do_peeling)
         {
 	  if (known_alignment_for_access_p (stmt_info))
@@ -1796,7 +1800,7 @@ vect_enhance_data_refs_alignment (loop_v
               for (j = 0; j < possible_npeel_number; j++)
                 {
                   vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
-					    dr, npeel_tmp);
+					    stmt_info, npeel_tmp);
 		  npeel_tmp += target_align / dr_size;
                 }
 
@@ -1810,11 +1814,11 @@ vect_enhance_data_refs_alignment (loop_v
                  stores over load.  */
 	      unsigned same_align_drs
 		= STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
-	      if (!dr0
+	      if (!peel_stmt_info
 		  || same_align_drs_max < same_align_drs)
 		{
 		  same_align_drs_max = same_align_drs;
-		  dr0 = dr;
+		  peel_stmt_info = stmt_info;
 		}
 	      /* For data-refs with the same number of related
 		 accesses prefer the one where the misalign
@@ -1822,6 +1826,7 @@ vect_enhance_data_refs_alignment (loop_v
 	      else if (same_align_drs_max == same_align_drs)
 		{
 		  struct loop *ivloop0, *ivloop;
+		  data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
 		  ivloop0 = outermost_invariant_loop_for_expr
 		    (loop, DR_BASE_ADDRESS (dr0));
 		  ivloop = outermost_invariant_loop_for_expr
@@ -1829,7 +1834,7 @@ vect_enhance_data_refs_alignment (loop_v
 		  if ((ivloop && !ivloop0)
 		      || (ivloop && ivloop0
 			  && flow_loop_nested_p (ivloop, ivloop0)))
-		    dr0 = dr;
+		    peel_stmt_info = stmt_info;
 		}
 
 	      one_misalignment_unknown = true;
@@ -1839,11 +1844,11 @@ vect_enhance_data_refs_alignment (loop_v
 	      if (!supportable_dr_alignment)
 	      {
 		one_dr_unsupportable = true;
-		unsupportable_dr = dr;
+		unsupportable_stmt_info = stmt_info;
 	      }
 
-	      if (!first_store && DR_IS_WRITE (dr))
-		first_store = dr;
+	      if (!first_store_info && DR_IS_WRITE (dr))
+		first_store_info = stmt_info;
             }
         }
       else
@@ -1886,16 +1891,16 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, dr0,
+      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
       dummy.release ();
 
-      if (first_store)
+      if (first_store_info)
 	{
 	  dummy.create (2);
-	  vect_get_peeling_costs_all_drs (datarefs, first_store,
+	  vect_get_peeling_costs_all_drs (datarefs, first_store_info,
 					  &store_inside_cost,
 					  &store_outside_cost,
 					  &dummy, &dummy,
@@ -1912,7 +1917,7 @@ vect_enhance_data_refs_alignment (loop_v
 	  || (load_inside_cost == store_inside_cost
 	      && load_outside_cost > store_outside_cost))
 	{
-	  dr0 = first_store;
+	  peel_stmt_info = first_store_info;
 	  peel_for_unknown_alignment.inside_cost = store_inside_cost;
 	  peel_for_unknown_alignment.outside_cost = store_outside_cost;
 	}
@@ -1936,18 +1941,18 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       peel_for_unknown_alignment.peel_info.count = 1
-	+ STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
+	+ STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
     }
 
   peel_for_unknown_alignment.peel_info.npeel = 0;
-  peel_for_unknown_alignment.peel_info.dr = dr0;
+  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
 
   best_peel = peel_for_unknown_alignment;
 
   peel_for_known_alignment.inside_cost = INT_MAX;
   peel_for_known_alignment.outside_cost = INT_MAX;
   peel_for_known_alignment.peel_info.count = 0;
-  peel_for_known_alignment.peel_info.dr = NULL;
+  peel_for_known_alignment.peel_info.stmt_info = NULL;
 
   if (do_peeling && one_misalignment_known)
     {
@@ -1959,7 +1964,7 @@ vect_enhance_data_refs_alignment (loop_v
     }
 
   /* Compare costs of peeling for known and unknown alignment. */
-  if (peel_for_known_alignment.peel_info.dr != NULL
+  if (peel_for_known_alignment.peel_info.stmt_info
       && peel_for_unknown_alignment.inside_cost
       >= peel_for_known_alignment.inside_cost)
     {
@@ -1976,7 +1981,7 @@ vect_enhance_data_refs_alignment (loop_v
      since we'd have to discard a chosen peeling except when it accidentally
      aligned the unsupportable data ref.  */
   if (one_dr_unsupportable)
-    dr0 = unsupportable_dr;
+    peel_stmt_info = unsupportable_stmt_info;
   else if (do_peeling)
     {
       /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
@@ -2007,7 +2012,7 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       npeel = best_peel.peel_info.npeel;
-      dr0 = best_peel.peel_info.dr;
+      peel_stmt_info = best_peel.peel_info.stmt_info;
 
       /* If no peeling is not more expensive than the best peeling we
 	 have so far, don't perform any peeling.  */
@@ -2017,8 +2022,8 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
       vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
+      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
 
       if (known_alignment_for_access_p (peel_stmt_info))
         {
@@ -2052,7 +2057,7 @@ vect_enhance_data_refs_alignment (loop_v
         }
 
       /* Ensure that all datarefs can be vectorized after the peel.  */
-      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
+      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
 	do_peeling = false;
 
       /* Check if all datarefs are supportable and log.  */
@@ -2125,7 +2130,8 @@ vect_enhance_data_refs_alignment (loop_v
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
 
-		vect_update_misalignment_for_peel (dr, dr0, npeel);
+		vect_update_misalignment_for_peel (stmt_info,
+						   peel_stmt_info, npeel);
 	      }
 
           LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
@@ -2188,7 +2194,8 @@ vect_enhance_data_refs_alignment (loop_v
 	      break;
 	    }
 
-	  supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+	  supportable_dr_alignment
+	    = vect_supportable_dr_alignment (stmt_info, false);
 
           if (!supportable_dr_alignment)
             {
@@ -2203,7 +2210,6 @@ vect_enhance_data_refs_alignment (loop_v
                   break;
                 }
 
-	      stmt_info = vect_dr_stmt (dr);
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
 	      gcc_assert (vectype);
 
@@ -2314,9 +2320,9 @@ vect_find_same_alignment_drs (struct dat
   if (maybe_ne (diff, 0))
     {
       /* Get the wider of the two alignments.  */
-      unsigned int align_a = (vect_calculate_target_alignment (dra)
+      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
 			      / BITS_PER_UNIT);
-      unsigned int align_b = (vect_calculate_target_alignment (drb)
+      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
 			      / BITS_PER_UNIT);
       unsigned int max_align = MAX (align_a, align_b);
 
@@ -2366,7 +2372,7 @@ vect_analyze_data_refs_alignment (loop_v
     {
       stmt_vec_info stmt_info = vect_dr_stmt (dr);
       if (STMT_VINFO_VECTORIZABLE (stmt_info))
-	vect_compute_data_ref_alignment (dr);
+	vect_compute_data_ref_alignment (stmt_info);
     }
 
   return true;
@@ -2382,17 +2388,16 @@ vect_slp_analyze_and_verify_node_alignme
      the node is permuted in which case we start from the first
      element in the group.  */
   stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  stmt_vec_info stmt_info = first_stmt_info;
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
-    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
+    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 
-  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
-  vect_compute_data_ref_alignment (dr);
+  vect_compute_data_ref_alignment (stmt_info);
   /* For creating the data-ref pointer we need alignment of the
      first element anyway.  */
-  if (dr != first_dr)
-    vect_compute_data_ref_alignment (first_dr);
-  if (! verify_data_ref_alignment (dr))
+  if (stmt_info != first_stmt_info)
+    vect_compute_data_ref_alignment (first_stmt_info);
+  if (! verify_data_ref_alignment (first_stmt_info))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2430,19 +2435,19 @@ vect_slp_analyze_and_verify_instance_ali
 }
 
 
-/* Analyze groups of accesses: check that DR belongs to a group of
-   accesses of legal size, step, etc.  Detect gaps, single element
-   interleaving, and other special cases. Set grouped access info.
-   Collect groups of strided stores for further use in SLP analysis.
-   Worker for vect_analyze_group_access.  */
+/* Analyze groups of accesses: check that the load or store in STMT_INFO
+   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
+   single element interleaving, and other special cases.  Set grouped
+   access info.  Collect groups of strided stores for further use in
+   SLP analysis.  Worker for vect_analyze_group_access.  */
 
 static bool
-vect_analyze_group_access_1 (struct data_reference *dr)
+vect_analyze_group_access_1 (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
   HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   HOST_WIDE_INT dr_step = -1;
@@ -2519,7 +2524,7 @@ vect_analyze_group_access_1 (struct data
       if (bb_vinfo)
 	{
 	  /* Mark the statement as unvectorizable.  */
-	  STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	  STMT_VINFO_VECTORIZABLE (stmt_info) = false;
 	  return true;
 	}
 
@@ -2667,18 +2672,18 @@ vect_analyze_group_access_1 (struct data
   return true;
 }
 
-/* Analyze groups of accesses: check that DR belongs to a group of
-   accesses of legal size, step, etc.  Detect gaps, single element
-   interleaving, and other special cases. Set grouped access info.
-   Collect groups of strided stores for further use in SLP analysis.  */
+/* Analyze groups of accesses: check that the load or store in STMT_INFO
+   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
+   single element interleaving, and other special cases.  Set grouped
+   access info.  Collect groups of strided stores for further use in
+   SLP analysis.  */
 
 static bool
-vect_analyze_group_access (struct data_reference *dr)
+vect_analyze_group_access (stmt_vec_info stmt_info)
 {
-  if (!vect_analyze_group_access_1 (dr))
+  if (!vect_analyze_group_access_1 (stmt_info))
     {
       /* Dissolve the group if present.  */
-      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
       while (stmt_info)
 	{
 	  stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
@@ -2691,16 +2696,16 @@ vect_analyze_group_access (struct data_r
   return true;
 }
 
-/* Analyze the access pattern of the data-reference DR.
+/* Analyze the access pattern of the load or store in STMT_INFO.
    In case of non-consecutive accesses call vect_analyze_group_access() to
    analyze groups of accesses.  */
 
 static bool
-vect_analyze_data_ref_access (struct data_reference *dr)
+vect_analyze_data_ref_access (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
 
@@ -2780,10 +2785,10 @@ vect_analyze_data_ref_access (struct dat
   if (TREE_CODE (step) != INTEGER_CST)
     return (STMT_VINFO_STRIDED_P (stmt_info)
 	    && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
-		|| vect_analyze_group_access (dr)));
+		|| vect_analyze_group_access (stmt_info)));
 
   /* Not consecutive access - check if it's a part of interleaving group.  */
-  return vect_analyze_group_access (dr);
+  return vect_analyze_group_access (stmt_info);
 }
 
 /* Compare two data-references DRA and DRB to group them into chunks
@@ -3062,25 +3067,28 @@ vect_analyze_data_ref_accesses (vec_info
     }
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
-    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
-        && !vect_analyze_data_ref_access (dr))
-      {
-	if (dump_enabled_p ())
-	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-	                   "not vectorized: complicated access pattern.\n");
+    {
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      if (STMT_VINFO_VECTORIZABLE (stmt_info)
+	  && !vect_analyze_data_ref_access (stmt_info))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "not vectorized: complicated access pattern.\n");
 
-        if (is_a <bb_vec_info> (vinfo))
-	  {
-	    /* Mark the statement as not vectorizable.  */
-	    STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
-	    continue;
-	  }
-        else
-	  {
-	    datarefs_copy.release ();
-	    return false;
-	  }
-      }
+	  if (is_a <bb_vec_info> (vinfo))
+	    {
+	      /* Mark the statement as not vectorizable.  */
+	      STMT_VINFO_VECTORIZABLE (stmt_info) = false;
+	      continue;
+	    }
+	  else
+	    {
+	      datarefs_copy.release ();
+	      return false;
+	    }
+	}
+    }
 
   datarefs_copy.release ();
   return true;
@@ -3089,7 +3097,7 @@ vect_analyze_data_ref_accesses (vec_info
 /* Function vect_vfa_segment_size.
 
    Input:
-     DR: The data reference.
+     STMT_INFO: the load or store statement.
      LENGTH_FACTOR: segment length to consider.
 
    Return a value suitable for the dr_with_seg_len::seg_len field.
@@ -3098,8 +3106,9 @@ vect_analyze_data_ref_accesses (vec_info
    the size of the access; in effect it only describes the first byte.  */
 
 static tree
-vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
+vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   length_factor = size_binop (MINUS_EXPR,
 			      fold_convert (sizetype, length_factor),
 			      size_one_node);
@@ -3107,23 +3116,23 @@ vect_vfa_segment_size (struct data_refer
 		     length_factor);
 }
 
-/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
+/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
    gives the worst-case number of bytes covered by the segment.  */
 
 static unsigned HOST_WIDE_INT
-vect_vfa_access_size (data_reference *dr)
+vect_vfa_access_size (stmt_vec_info stmt_vinfo)
 {
-  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
   tree ref_type = TREE_TYPE (DR_REF (dr));
   unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
   unsigned HOST_WIDE_INT access_size = ref_size;
   if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
     {
-      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
+      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
       access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
     }
   if (STMT_VINFO_VEC_STMT (stmt_vinfo)
-      && (vect_supportable_dr_alignment (dr, false)
+      && (vect_supportable_dr_alignment (stmt_vinfo, false)
 	  == dr_explicit_realign_optimized))
     {
       /* We might access a full vector's worth.  */
@@ -3281,13 +3290,14 @@ vect_check_lower_bound (loop_vec_info lo
   LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
 }
 
-/* Return true if it's unlikely that the step of the vectorized form of DR
-   will span fewer than GAP bytes.  */
+/* Return true if it's unlikely that the step of the vectorized form of
+   the load or store in STMT_INFO will span fewer than GAP bytes.  */
 
 static bool
-vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
+vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
@@ -3295,16 +3305,20 @@ vect_small_gap_p (loop_vec_info loop_vin
   return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
 }
 
-/* Return true if we know that there is no alias between DR_A and DR_B
-   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
-   *LOWER_BOUND_OUT to this N.  */
+/* Return true if we know that there is no alias between the loads and
+   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
+   STMT_INFO_A's access is >= some N.  When returning true,
+   set *LOWER_BOUND_OUT to this N.  */
 
 static bool
-vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
+vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
+				stmt_vec_info stmt_info_b,
 				poly_uint64 *lower_bound_out)
 {
   /* Check that there is a constant gap of known sign between DR_A
      and DR_B.  */
+  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
+  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
   poly_int64 init_a, init_b;
   if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
       || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
@@ -3324,8 +3338,7 @@ vectorizable_with_step_bound_p (data_ref
   /* If the two accesses could be dependent within a scalar iteration,
      make sure that we'd retain their order.  */
   if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
-      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
-					 vect_dr_stmt (dr_b)))
+      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
     return false;
 
   /* There is no alias if abs (DR_STEP) is greater than or equal to
@@ -3426,7 +3439,8 @@ vect_prune_runtime_alias_test_list (loop
 	 and intra-iteration dependencies are guaranteed to be honored.  */
       if (ignore_step_p
 	  && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
-	      || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
+	      || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
+						 &lower_bound)))
 	{
 	  if (dump_enabled_p ())
 	    {
@@ -3446,9 +3460,10 @@ vect_prune_runtime_alias_test_list (loop
 	 than the number of bytes handled by one vector iteration.)  */
       if (!ignore_step_p
 	  && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
-	  && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
-	  && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
-	      || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
+	  && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
+					     &lower_bound)
+	  && (vect_small_gap_p (stmt_info_a, lower_bound)
+	      || vect_small_gap_p (stmt_info_b, lower_bound)))
 	{
 	  bool unsigned_p = dr_known_forward_stride_p (dr_a);
 	  if (dump_enabled_p ())
@@ -3501,11 +3516,13 @@ vect_prune_runtime_alias_test_list (loop
 	    length_factor = scalar_loop_iters;
 	  else
 	    length_factor = size_int (vect_factor);
-	  segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
-	  segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
+	  segment_length_a = vect_vfa_segment_size (stmt_info_a,
+						    length_factor);
+	  segment_length_b = vect_vfa_segment_size (stmt_info_b,
+						    length_factor);
 	}
-      access_size_a = vect_vfa_access_size (dr_a);
-      access_size_b = vect_vfa_access_size (dr_b);
+      access_size_a = vect_vfa_access_size (stmt_info_a);
+      access_size_b = vect_vfa_access_size (stmt_info_b);
       align_a = vect_vfa_align (dr_a);
       align_b = vect_vfa_align (dr_b);
 
@@ -4463,12 +4480,12 @@ vect_get_new_ssa_name (tree type, enum v
   return new_vect_var;
 }
 
-/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
+/* Duplicate ptr info and set alignment/misaligment on NAME from STMT_INFO.  */
 
 static void
-vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
+vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
   int misalign = dr_misalignment (stmt_info);
   if (misalign == DR_MISALIGNMENT_UNKNOWN)
@@ -4579,7 +4596,7 @@ vect_create_addr_base_for_vector_ref (st
       && TREE_CODE (addr_base) == SSA_NAME
       && !SSA_NAME_PTR_INFO (addr_base))
     {
-      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
+      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
       if (offset || byte_offset)
 	mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
     }
@@ -4845,8 +4862,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -4875,8 +4892,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -6434,17 +6451,17 @@ vect_can_force_dr_alignment_p (const_tre
 }
 
 
-/* Return whether the data reference DR is supported with respect to its
-   alignment.
+/* Return whether the load or store in STMT_INFO is supported with
+   respect to its alignment.
    If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
    it is aligned, i.e., check if it is possible to vectorize it with different
    alignment.  */
 
 enum dr_alignment_support
-vect_supportable_dr_alignment (struct data_reference *dr,
+vect_supportable_dr_alignment (stmt_vec_info stmt_info,
                                bool check_aligned_accesses)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   machine_mode mode = TYPE_MODE (vectype);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:05.744462369 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:08.924434128 +0100
@@ -1057,8 +1057,8 @@ vect_get_store_cost (stmt_vec_info stmt_
 		     unsigned int *inside_cost,
 		     stmt_vector_for_cost *body_cost_vec)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (stmt_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1237,8 +1237,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
 		    stmt_vector_for_cost *body_cost_vec,
 		    bool record_prologue_costs)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (stmt_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -2340,7 +2340,6 @@ get_negative_load_store_type (stmt_vec_i
 			      vec_load_store_type vls_type,
 			      unsigned int ncopies)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   dr_alignment_support alignment_support_scheme;
 
   if (ncopies > 1)
@@ -2351,7 +2350,7 @@ get_negative_load_store_type (stmt_vec_i
       return VMAT_ELEMENTWISE;
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
   if (alignment_support_scheme != dr_aligned
       && alignment_support_scheme != dr_unaligned_supported)
     {
@@ -2924,15 +2923,14 @@ vect_get_strided_load_store_ops (stmt_ve
 }
 
 /* Return the amount that should be added to a vector pointer to move
-   to the next or previous copy of AGGR_TYPE.  DR is the data reference
-   being vectorized and MEMORY_ACCESS_TYPE describes the type of
+   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
+   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
    vectorization.  */
 
 static tree
-vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
+vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
 			     vect_memory_access_type memory_access_type)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   if (memory_access_type == VMAT_INVARIANT)
     return size_zero_node;
 
@@ -6171,12 +6169,12 @@ vectorizable_operation (stmt_vec_info st
   return true;
 }
 
-/* A helper function to ensure data reference DR's base alignment.  */
+/* If we decided to increase the base alignment for the memory access in
+   STMT_INFO, but haven't increased it yet, do so now.  */
 
 static void
-ensure_base_align (struct data_reference *dr)
+ensure_base_align (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
     return;
 
@@ -6439,7 +6437,7 @@ vectorizable_store (stmt_vec_info stmt_i
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (stmt_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -6882,7 +6880,8 @@ vectorizable_store (stmt_vec_info stmt_i
   auto_vec<tree> dr_chain (group_size);
   oprnds.create (group_size);
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_stmt_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -6920,7 +6919,8 @@ vectorizable_store (stmt_vec_info stmt_i
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
+					  memory_access_type);
     }
 
   if (mask)
@@ -7667,7 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (stmt_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -7990,7 +7990,8 @@ vectorizable_load (stmt_vec_info stmt_in
       ref_type = reference_alias_ptr_type (DR_REF (first_dr));
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_stmt_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -8155,7 +8156,8 @@ vectorizable_load (stmt_vec_info stmt_in
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
+					  memory_access_type);
     }
 
   tree vec_mask = NULL_TREE;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [39/46] Replace STMT_VINFO_UNALIGNED_DR with the associated statement
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (37 preceding siblings ...)
  2018-07-24 10:08 ` [38/46] Pass stmt_vec_infos instead of data_references where relevant Richard Sandiford
@ 2018-07-24 10:08 ` Richard Sandiford
  2018-07-26 11:08   ` [39/46 v2] Change STMT_VINFO_UNALIGNED_DR to a dr_vec_info Richard Sandiford
  2018-07-24 10:09 ` [40/46] Add vec_info::lookup_dr Richard Sandiford
                   ` (6 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:08 UTC (permalink / raw)
  To: gcc-patches

After the previous changes, it makes more sense to record the stmt
whose access is going to be aligned via peeling, rather than recording
the associated scalar data reference.
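
Consumers therefore flip from looking up the statement from the
recorded data reference to recovering the data reference from the
recorded statement; schematically (this just restates the
get_misalign_in_elems hunk below):

  /* Before: */
  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
  stmt_vec_info stmt_info = vect_dr_stmt (dr);

  /* After: */
  stmt_vec_info stmt_info = LOOP_VINFO_UNALIGNED_STMT (loop_vinfo);
  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);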


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_loop_vec_info::unaligned_dr): Replace with...
	(_loop_vec_info::unaligned_stmt): ...this new field.
	(LOOP_VINFO_UNALIGNED_DR): Delete.
	(LOOP_VINFO_UNALIGNED_STMT): New macro.
	* tree-vect-data-refs.c (vect_enhance_data_refs_alignment): Use
	LOOP_VINFO_UNALIGNED_STMT instead of LOOP_VINFO_UNALIGNED_DR.
	* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
	(vect_gen_prolog_loop_niters): Likewise.
	* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Update
	after above change to _loop_vec_info.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:08.924434128 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:12.252404574 +0100
@@ -436,7 +436,7 @@ typedef struct _loop_vec_info : public v
   tree mask_compare_type;
 
   /* Unknown DRs according to which loop was peeled.  */
-  struct data_reference *unaligned_dr;
+  stmt_vec_info unaligned_stmt;
 
   /* peeling_for_alignment indicates whether peeling for alignment will take
      place, and what the peeling factor should be:
@@ -445,7 +445,7 @@ typedef struct _loop_vec_info : public v
         If X>0: Peel first X iterations.
         If X=-1: Generate a runtime test to calculate the number of iterations
                  to be peeled, using the dataref recorded in the field
-                 unaligned_dr.  */
+                 unaligned_stmt.  */
   int peeling_for_alignment;
 
   /* The mask used to check the alignment of pointers or arrays.  */
@@ -576,7 +576,7 @@ #define LOOP_VINFO_DATAREFS(L)
 #define LOOP_VINFO_DDRS(L)                 (L)->shared->ddrs
 #define LOOP_VINFO_INT_NITERS(L)           (TREE_INT_CST_LOW ((L)->num_iters))
 #define LOOP_VINFO_PEELING_FOR_ALIGNMENT(L) (L)->peeling_for_alignment
-#define LOOP_VINFO_UNALIGNED_DR(L)         (L)->unaligned_dr
+#define LOOP_VINFO_UNALIGNED_STMT(L)       (L)->unaligned_stmt
 #define LOOP_VINFO_MAY_MISALIGN_STMTS(L)   (L)->may_misalign_stmts
 #define LOOP_VINFO_MAY_ALIAS_DDRS(L)       (L)->may_alias_ddrs
 #define LOOP_VINFO_COMP_ALIAS_DDRS(L)      (L)->comp_alias_ddrs
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:24:08.924434128 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:12.248404609 +0100
@@ -2134,7 +2134,7 @@ vect_enhance_data_refs_alignment (loop_v
 						   peel_stmt_info, npeel);
 	      }
 
-          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
+          LOOP_VINFO_UNALIGNED_STMT (loop_vinfo) = peel_stmt_info;
           if (npeel)
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
           else
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:24:05.740462405 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:24:12.248404609 +0100
@@ -1560,8 +1560,8 @@ vect_update_ivs_after_vectorizer (loop_v
 static tree
 get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = LOOP_VINFO_UNALIGNED_STMT (loop_vinfo);
+  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
@@ -1594,8 +1594,8 @@ get_misalign_in_elems (gimple **seq, loo
 /* Function vect_gen_prolog_loop_niters
 
    Generate the number of iterations which should be peeled as prolog for the
-   loop represented by LOOP_VINFO.  It is calculated as the misalignment of
-   DR - the data reference recorded in LOOP_VINFO_UNALIGNED_DR (LOOP_VINFO).
+   loop represented by LOOP_VINFO.  It is calculated as the misalignment of DR
+   - the data reference recorded in LOOP_VINFO_UNALIGNED_STMT (LOOP_VINFO).
    As a result, after the execution of this loop, the data reference DR will
    refer to an aligned location.  The following computation is generated:
 
@@ -1626,12 +1626,12 @@ get_misalign_in_elems (gimple **seq, loo
 vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
 			     basic_block bb, int *bound)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
+  stmt_vec_info stmt_info = LOOP_VINFO_UNALIGNED_STMT (loop_vinfo);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree var;
   tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
   gimple_seq stmts = NULL, new_stmts = NULL;
   tree iters, iters_name;
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:24:02.360492422 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:12.252404574 +0100
@@ -817,7 +817,7 @@ _loop_vec_info::_loop_vec_info (struct l
     max_vectorization_factor (0),
     mask_skip_niters (NULL_TREE),
     mask_compare_type (NULL_TREE),
-    unaligned_dr (NULL),
+    unaligned_stmt (NULL),
     peeling_for_alignment (0),
     ptr_mask (0),
     ivexpr_map (NULL),
@@ -2142,8 +2142,8 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  /* Niters for peeled prolog loop.  */
 	  if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
 	    {
-	      struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-	      tree vectype = STMT_VINFO_VECTYPE (vect_dr_stmt (dr));
+	      stmt_vec_info stmt_info = LOOP_VINFO_UNALIGNED_STMT (loop_vinfo);
+	      tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 	      niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
 	    }
 	  else

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [41/46] Add vec_info::remove_stmt
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (39 preceding siblings ...)
  2018-07-24 10:09 ` [40/46] Add vec_info::lookup_dr Richard Sandiford
@ 2018-07-24 10:09 ` Richard Sandiford
  2018-07-31 12:02   ` Richard Biener
  2018-07-24 10:09 ` [42/46] Add vec_info::replace_stmt Richard Sandiford
                   ` (4 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:09 UTC (permalink / raw)
  To: gcc-patches

This patch adds a new helper function for permanently removing a
statement and its associated stmt_vec_info.
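
At each call site the effect is that the open-coded removal sequence
collapses into a single call.  Roughly (a sketch of the pattern with
placeholder variable names, using the same calls as the hunks below):

  /* Before: every caller repeated the same sequence.  */
  gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
  unlink_stmt_vdef (stmt_info->stmt);
  gsi_remove (&si, true);
  release_defs (stmt_info->stmt);
  free_stmt_vec_info (stmt_info);

  /* After: the owning vec_info does the bookkeeping.  */
  vinfo->remove_stmt (stmt_info);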


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::remove_stmt): Declare.
	* tree-vectorizer.c (vec_info::remove_stmt): New function.
	* tree-vect-loop-manip.c (vect_set_loop_condition): Use it.
	* tree-vect-loop.c (vect_transform_loop): Likewise.
	* tree-vect-slp.c (vect_schedule_slp): Likewise.
	* tree-vect-stmts.c (vect_remove_stores): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:16.552366384 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:19.544339803 +0100
@@ -241,6 +241,7 @@ struct vec_info {
   stmt_vec_info lookup_def (tree);
   stmt_vec_info lookup_single_use (tree);
   stmt_vec_info lookup_dr (data_reference *);
+  void remove_stmt (stmt_vec_info);
 
   /* The type of vectorization.  */
   vec_kind kind;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:16.552366384 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:19.544339803 +0100
@@ -577,6 +577,20 @@ vec_info::lookup_dr (data_reference *dr)
   return stmt_info;
 }
 
+/* Permanently remove the statement described by STMT_INFO from the
+   function.  */
+
+void
+vec_info::remove_stmt (stmt_vec_info stmt_info)
+{
+  gcc_assert (!stmt_info->pattern_stmt_p);
+  gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
+  unlink_stmt_vdef (stmt_info->stmt);
+  gsi_remove (&si, true);
+  release_defs (stmt_info->stmt);
+  free_stmt_vec_info (stmt_info);
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:24:16.552366384 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:24:19.540339838 +0100
@@ -935,8 +935,12 @@ vect_set_loop_condition (struct loop *lo
 						  loop_cond_gsi);
 
   /* Remove old loop exit test.  */
-  gsi_remove (&loop_cond_gsi, true);
-  free_stmt_vec_info (orig_cond);
+  stmt_vec_info orig_cond_info;
+  if (loop_vinfo
+      && (orig_cond_info = loop_vinfo->lookup_stmt (orig_cond)))
+    loop_vinfo->remove_stmt (orig_cond_info);
+  else
+    gsi_remove (&loop_cond_gsi, true);
 
   if (dump_enabled_p ())
     {
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:24:12.252404574 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:19.540339838 +0100
@@ -8487,28 +8487,18 @@ vect_transform_loop (loop_vec_info loop_
 		  vect_transform_loop_stmt (loop_vinfo, stmt_info, &si,
 					    &seen_store, &slp_scheduled);
 		}
+	      gsi_next (&si);
 	      if (seen_store)
 		{
 		  if (STMT_VINFO_GROUPED_ACCESS (seen_store))
-		    {
-		      /* Interleaving.  If IS_STORE is TRUE, the
-			 vectorization of the interleaving chain was
-			 completed - free all the stores in the chain.  */
-		      gsi_next (&si);
-		      vect_remove_stores (DR_GROUP_FIRST_ELEMENT (seen_store));
-		    }
+		    /* Interleaving.  If IS_STORE is TRUE, the
+		       vectorization of the interleaving chain was
+		       completed - free all the stores in the chain.  */
+		    vect_remove_stores (DR_GROUP_FIRST_ELEMENT (seen_store));
 		  else
-		    {
-		      /* Free the attached stmt_vec_info and remove the
-			 stmt.  */
-		      free_stmt_vec_info (stmt);
-		      unlink_stmt_vdef (stmt);
-		      gsi_remove (&si, true);
-		      release_defs (stmt);
-		    }
+		    /* Free the attached stmt_vec_info and remove the stmt.  */
+		    loop_vinfo->remove_stmt (stmt_info);
 		}
-	      else
-		gsi_next (&si);
 	    }
 	}
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:24:02.360492422 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:24:19.540339838 +0100
@@ -4087,7 +4087,6 @@ vect_schedule_slp (vec_info *vinfo)
       slp_tree root = SLP_INSTANCE_TREE (instance);
       stmt_vec_info store_info;
       unsigned int j;
-      gimple_stmt_iterator gsi;
 
       /* Remove scalar call stmts.  Do not do this for basic-block
 	 vectorization as not all uses may be vectorized.
@@ -4108,11 +4107,7 @@ vect_schedule_slp (vec_info *vinfo)
 	  if (store_info->pattern_stmt_p)
 	    store_info = STMT_VINFO_RELATED_STMT (store_info);
 	  /* Free the attached stmt_vec_info and remove the stmt.  */
-	  gsi = gsi_for_stmt (store_info);
-	  unlink_stmt_vdef (store_info);
-	  gsi_remove (&gsi, true);
-	  release_defs (store_info);
-	  free_stmt_vec_info (store_info);
+	  vinfo->remove_stmt (store_info);
         }
     }
 
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:08.924434128 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:19.544339803 +0100
@@ -9842,8 +9842,8 @@ vect_transform_stmt (stmt_vec_info stmt_
 void
 vect_remove_stores (stmt_vec_info first_stmt_info)
 {
+  vec_info *vinfo = first_stmt_info->vinfo;
   stmt_vec_info next_stmt_info = first_stmt_info;
-  gimple_stmt_iterator next_si;
 
   while (next_stmt_info)
     {
@@ -9851,11 +9851,7 @@ vect_remove_stores (stmt_vec_info first_
       if (next_stmt_info->pattern_stmt_p)
 	next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
       /* Free the attached stmt_vec_info and remove the stmt.  */
-      next_si = gsi_for_stmt (next_stmt_info->stmt);
-      unlink_stmt_vdef (next_stmt_info->stmt);
-      gsi_remove (&next_si, true);
-      release_defs (next_stmt_info->stmt);
-      free_stmt_vec_info (next_stmt_info);
+      vinfo->remove_stmt (next_stmt_info);
       next_stmt_info = tmp;
     }
 }

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [40/46] Add vec_info::lookup_dr
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (38 preceding siblings ...)
  2018-07-24 10:08 ` [39/46] Replace STMT_VINFO_UNALIGNED_DR with the associated statement Richard Sandiford
@ 2018-07-24 10:09 ` Richard Sandiford
  2018-07-26 11:10   ` [40/46 v2] " Richard Sandiford
  2018-07-24 10:09 ` [41/46] Add vec_info::remove_stmt Richard Sandiford
                   ` (5 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:09 UTC (permalink / raw)
  To: gcc-patches

Previous patches got rid of a lot of calls to vect_dr_stmt.
This patch replaces the remaining ones with calls to a new
vec_info::lookup_dr function, so that the lookup is relative
to a particular vec_info rather than to global state.
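
At a typical use site the change looks like this (a sketch, with
identifiers as in the hunks below):

  /* Before: consults global state via DR_STMT.  */
  stmt_vec_info stmt_info = vect_dr_stmt (dr);

  /* After: the lookup names the vec_info it is relative to.  */
  stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);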


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::lookup_dr): New member function.
	(vect_dr_stmt): Delete.
	* tree-vectorizer.c (vec_info::lookup_dr): New function.
	* tree-vect-loop-manip.c (vect_update_inits_of_drs): Use it instead
	of vect_dr_stmt.
	* tree-vect-data-refs.c (vect_analyze_possibly_independent_ddr)
	(vect_analyze_data_ref_dependence, vect_record_base_alignments)
	(vect_verify_datarefs_alignment, vect_peeling_supportable)
	(vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
	(vect_analyze_data_refs): Likewise.
	(vect_slp_analyze_data_ref_dependence): Likewise.  Take a vec_info
	argument.
	(vect_find_same_alignment_drs): Likewise.
	(vect_slp_analyze_node_dependences): Update calls accordingly.
	(vect_analyze_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
	instead of vect_dr_stmt.
	(vect_get_peeling_costs_all_drs): Take a loop_vec_info instead
	of a vector of data references.  Use vec_info::lookup_dr instead of
	vect_dr_stmt.
	(vect_peeling_hash_get_lowest_cost): Update calls accordingly.
	(vect_enhance_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
	instead of vect_dr_stmt.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:12.252404574 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:16.552366384 +0100
@@ -240,6 +240,7 @@ struct vec_info {
   stmt_vec_info lookup_stmt (gimple *);
   stmt_vec_info lookup_def (tree);
   stmt_vec_info lookup_single_use (tree);
+  stmt_vec_info lookup_dr (data_reference *);
 
   /* The type of vectorization.  */
   vec_kind kind;
@@ -1327,22 +1328,6 @@ vect_dr_behavior (stmt_vec_info stmt_inf
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
 }
 
-/* Return the stmt DR is in.  For DR_STMT that have been replaced by
-   a pattern this returns the corresponding pattern stmt.  Otherwise
-   DR_STMT is returned.  */
-
-inline stmt_vec_info
-vect_dr_stmt (data_reference *dr)
-{
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
-    return STMT_VINFO_RELATED_STMT (stmt_info);
-  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
-  gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
-  return stmt_info;
-}
-
 /* Return true if the vect cost model is unlimited.  */
 static inline bool
 unlimited_cost_model (loop_p loop)
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:22:30.401309046 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:16.552366384 +0100
@@ -562,6 +562,21 @@ vec_info::lookup_single_use (tree lhs)
   return NULL;
 }
 
+/* Return the stmt DR is in.  For DR_STMT that have been replaced by
+   a pattern this returns the corresponding pattern stmt.  Otherwise
+   it returns the information for DR_STMT itself.  */
+
+stmt_vec_info
+vec_info::lookup_dr (data_reference *dr)
+{
+  stmt_vec_info stmt_info = lookup_stmt (DR_STMT (dr));
+  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
+    return STMT_VINFO_RELATED_STMT (stmt_info);
+  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
+  gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
+  return stmt_info;
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-24 10:24:12.248404609 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-24 10:24:16.552366384 +0100
@@ -1752,8 +1752,8 @@ vect_update_inits_of_drs (loop_vec_info
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      gimple *stmt = DR_STMT (dr);
-      if (!STMT_VINFO_GATHER_SCATTER_P (vinfo_for_stmt (stmt)))
+      stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
+      if (!STMT_VINFO_GATHER_SCATTER_P (stmt_info))
 	vect_update_init_of_dr (dr, niters, code);
     }
 }
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:24:12.248404609 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:16.552366384 +0100
@@ -267,10 +267,10 @@ vect_analyze_possibly_independent_ddr (d
 
 	     Note that the alias checks will be removed if the VF ends up
 	     being small enough.  */
-	  return (!STMT_VINFO_GATHER_SCATTER_P
-		     (vinfo_for_stmt (DR_STMT (DDR_A (ddr))))
-		  && !STMT_VINFO_GATHER_SCATTER_P
-		        (vinfo_for_stmt (DR_STMT (DDR_B (ddr))))
+	  stmt_vec_info stmt_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
+	  stmt_vec_info stmt_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
+	  return (!STMT_VINFO_GATHER_SCATTER_P (stmt_info_a)
+		  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info_b)
 		  && vect_mark_for_runtime_alias_test (ddr, loop_vinfo));
 	}
     }
@@ -294,8 +294,8 @@ vect_analyze_data_ref_dependence (struct
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  stmt_vec_info stmtinfo_a = loop_vinfo->lookup_dr (dra);
+  stmt_vec_info stmtinfo_b = loop_vinfo->lookup_dr (drb);
   lambda_vector dist_v;
   unsigned int loop_depth;
 
@@ -600,12 +600,13 @@ vect_analyze_data_ref_dependences (loop_
 /* Function vect_slp_analyze_data_ref_dependence.
 
    Return TRUE if there (might) exist a dependence between a memory-reference
-   DRA and a memory-reference DRB.  When versioning for alias may check a
-   dependence at run-time, return FALSE.  Adjust *MAX_VF according to
-   the data dependence.  */
+   DRA and a memory-reference DRB for VINFO.  When versioning for alias
+   may check a dependence at run-time, return FALSE.  Adjust *MAX_VF
+   according to the data dependence.  */
 
 static bool
-vect_slp_analyze_data_ref_dependence (struct data_dependence_relation *ddr)
+vect_slp_analyze_data_ref_dependence (vec_info *vinfo,
+				      struct data_dependence_relation *ddr)
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
@@ -626,9 +627,10 @@ vect_slp_analyze_data_ref_dependence (st
 
   /* If dra and drb are part of the same interleaving chain consider
      them independent.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
-      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
-	  == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
+  stmt_vec_info stmt_info_a = vinfo->lookup_dr (dra);
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info_a)
+      && (DR_GROUP_FIRST_ELEMENT (stmt_info_a)
+	  == DR_GROUP_FIRST_ELEMENT (vinfo->lookup_dr (drb))))
     return false;
 
   /* Unknown data dependence.  */
@@ -720,7 +722,8 @@ vect_slp_analyze_node_dependences (slp_i
 		  data_reference *store_dr = STMT_VINFO_DATA_REF (store_info);
 		  ddr_p ddr = initialize_data_dependence_relation
 				(dr_a, store_dr, vNULL);
-		  dependent = vect_slp_analyze_data_ref_dependence (ddr);
+		  dependent
+		    = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
 		  free_dependence_relation (ddr);
 		  if (dependent)
 		    break;
@@ -730,7 +733,7 @@ vect_slp_analyze_node_dependences (slp_i
 	    {
 	      ddr_p ddr = initialize_data_dependence_relation (dr_a,
 							       dr_b, vNULL);
-	      dependent = vect_slp_analyze_data_ref_dependence (ddr);
+	      dependent = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
 	      free_dependence_relation (ddr);
 	    }
 	  if (dependent)
@@ -842,7 +845,7 @@ vect_record_base_alignments (vec_info *v
   unsigned int i;
   FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_dr (dr);
       if (!DR_IS_CONDITIONAL_IN_STMT (dr)
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
@@ -1167,7 +1170,7 @@ vect_verify_datarefs_alignment (loop_vec
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_dr (dr);
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1392,12 +1395,12 @@ vect_peeling_hash_get_most_frequent (_ve
   return 1;
 }
 
-/* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
-   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
+/* Get the costs of peeling NPEEL iterations for LOOP_VINFO, checking
+   data access costs for all data refs.  If UNKNOWN_MISALIGNMENT is true,
+   we assume PEEL_STMT_INFO's misalignment will be zero after peeling.  */
 
 static void
-vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
+vect_get_peeling_costs_all_drs (loop_vec_info loop_vinfo,
 				stmt_vec_info peel_stmt_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
@@ -1406,12 +1409,13 @@ vect_get_peeling_costs_all_drs (vec<data
 				unsigned int npeel,
 				bool unknown_misalignment)
 {
+  vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   unsigned i;
   data_reference *dr;
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
 
@@ -1460,10 +1464,9 @@ vect_peeling_hash_get_lowest_cost (_vect
   body_cost_vec.create (2);
   epilogue_cost_vec.create (2);
 
-  vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->stmt_info, &inside_cost, &outside_cost,
-				  &body_cost_vec, &prologue_cost_vec,
-				  elem->npeel, false);
+  vect_get_peeling_costs_all_drs (loop_vinfo, elem->stmt_info, &inside_cost,
+				  &outside_cost, &body_cost_vec,
+				  &prologue_cost_vec, elem->npeel, false);
 
   body_cost_vec.release ();
 
@@ -1541,7 +1544,7 @@ vect_peeling_supportable (loop_vec_info
     {
       int save_misalignment;
 
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
       if (stmt_info == peel_stmt_info)
 	continue;
 
@@ -1725,7 +1728,7 @@ vect_enhance_data_refs_alignment (loop_v
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1891,7 +1894,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
+      vect_get_peeling_costs_all_drs (loop_vinfo, peel_stmt_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
@@ -1900,7 +1903,7 @@ vect_enhance_data_refs_alignment (loop_v
       if (first_store_info)
 	{
 	  dummy.create (2);
-	  vect_get_peeling_costs_all_drs (datarefs, first_store_info,
+	  vect_get_peeling_costs_all_drs (loop_vinfo, first_store_info,
 					  &store_inside_cost,
 					  &store_outside_cost,
 					  &dummy, &dummy,
@@ -1991,7 +1994,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, NULL, &nopeel_inside_cost,
+      vect_get_peeling_costs_all_drs (loop_vinfo, NULL, &nopeel_inside_cost,
 				      &nopeel_outside_cost, &dummy, &dummy,
 				      0, false);
       dummy.release ();
@@ -2125,7 +2128,7 @@ vect_enhance_data_refs_alignment (loop_v
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		stmt_vec_info stmt_info = vect_dr_stmt (dr);
+		stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
@@ -2175,7 +2178,7 @@ vect_enhance_data_refs_alignment (loop_v
     {
       FOR_EACH_VEC_ELT (datarefs, i, dr)
         {
-	  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+	  stmt_vec_info stmt_info = loop_vinfo->lookup_dr (dr);
 
 	  /* For interleaving, only the alignment of the first access
 	     matters.  */
@@ -2288,16 +2291,16 @@ vect_enhance_data_refs_alignment (loop_v
 
 /* Function vect_find_same_alignment_drs.
 
-   Update group and alignment relations according to the chosen
+   Update group and alignment relations in VINFO according to the chosen
    vectorization factor.  */
 
 static void
-vect_find_same_alignment_drs (struct data_dependence_relation *ddr)
+vect_find_same_alignment_drs (vec_info *vinfo, data_dependence_relation *ddr)
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  stmt_vec_info stmtinfo_a = vinfo->lookup_dr (dra);
+  stmt_vec_info stmtinfo_b = vinfo->lookup_dr (drb);
 
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
     return;
@@ -2362,7 +2365,7 @@ vect_analyze_data_refs_alignment (loop_v
   unsigned int i;
 
   FOR_EACH_VEC_ELT (ddrs, i, ddr)
-    vect_find_same_alignment_drs (ddr);
+    vect_find_same_alignment_drs (vinfo, ddr);
 
   vec<data_reference_p> datarefs = vinfo->shared->datarefs;
   struct data_reference *dr;
@@ -2370,7 +2373,7 @@ vect_analyze_data_refs_alignment (loop_v
   vect_record_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_dr (dr);
       if (STMT_VINFO_VECTORIZABLE (stmt_info))
 	vect_compute_data_ref_alignment (stmt_info);
     }
@@ -2933,7 +2936,7 @@ vect_analyze_data_ref_accesses (vec_info
   for (i = 0; i < datarefs_copy.length () - 1;)
     {
       data_reference_p dra = datarefs_copy[i];
-      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
+      stmt_vec_info stmtinfo_a = vinfo->lookup_dr (dra);
       stmt_vec_info lastinfo = NULL;
       if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
 	  || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
@@ -2944,7 +2947,7 @@ vect_analyze_data_ref_accesses (vec_info
       for (i = i + 1; i < datarefs_copy.length (); ++i)
 	{
 	  data_reference_p drb = datarefs_copy[i];
-	  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+	  stmt_vec_info stmtinfo_b = vinfo->lookup_dr (drb);
 	  if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
 	      || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
 	    break;
@@ -3068,7 +3071,7 @@ vect_analyze_data_ref_accesses (vec_info
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_dr (dr);
       if (STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !vect_analyze_data_ref_access (stmt_info))
 	{
@@ -3430,10 +3433,10 @@ vect_prune_runtime_alias_test_list (loop
 	}
 
       dr_a = DDR_A (ddr);
-      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
+      stmt_vec_info stmt_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
 
       dr_b = DDR_B (ddr);
-      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
+      stmt_vec_info stmt_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
 
       /* Skip the pair if inter-iteration dependencies are irrelevant
 	 and intra-iteration dependencies are guaranteed to be honored.  */
@@ -4149,7 +4152,7 @@ vect_analyze_data_refs (vec_info *vinfo,
       poly_uint64 vf;
 
       gcc_assert (DR_REF (dr));
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_dr (dr);
 
       /* Check that analysis of the data-ref succeeded.  */
       if (!DR_BASE_ADDRESS (dr) || !DR_OFFSET (dr) || !DR_INIT (dr)

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [42/46] Add vec_info::replace_stmt
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (40 preceding siblings ...)
  2018-07-24 10:09 ` [41/46] Add vec_info::remove_stmt Richard Sandiford
@ 2018-07-24 10:09 ` Richard Sandiford
  2018-07-31 12:03   ` Richard Biener
  2018-07-24 10:10 ` [45/46] Remove vect_stmt_in_region_p Richard Sandiford
                   ` (3 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:09 UTC (permalink / raw)
  To: gcc-patches

This patch adds a helper for replacing a stmt_vec_info's statement with
a new statement.
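
As with remove_stmt, the point is to keep the uid mapping, the
stmt_vec_info and the IL in sync in one place.  Roughly (a sketch
taken from the before/after forms in the hunks below):

  /* Before: each caller updated all three by hand.  */
  set_vinfo_for_stmt (new_stmt, stmt_info);
  set_vinfo_for_stmt (old_stmt, NULL);
  STMT_VINFO_STMT (stmt_info) = new_stmt;
  gsi_replace (&gsi, new_stmt, false);

  /* After.  */
  vinfo->replace_stmt (&gsi, stmt_info, new_stmt);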


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::replace_stmt): Declare.
	* tree-vectorizer.c (vec_info::replace_stmt): New function.
	* tree-vect-slp.c (vect_remove_slp_scalar_calls): Use it.
	* tree-vect-stmts.c (vectorizable_call): Likewise.
	(vectorizable_simd_clone_call): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:19.544339803 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:22.684311906 +0100
@@ -242,6 +242,7 @@ struct vec_info {
   stmt_vec_info lookup_single_use (tree);
   stmt_vec_info lookup_dr (data_reference *);
   void remove_stmt (stmt_vec_info);
+  void replace_stmt (gimple_stmt_iterator *, stmt_vec_info, gimple *);
 
   /* The type of vectorization.  */
   vec_kind kind;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:19.544339803 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:22.684311906 +0100
@@ -591,6 +591,22 @@ vec_info::remove_stmt (stmt_vec_info stm
   free_stmt_vec_info (stmt_info);
 }
 
+/* Replace the statement at GSI by NEW_STMT, both the vectorization
+   information and the function itself.  STMT_INFO describes the statement
+   at GSI.  */
+
+void
+vec_info::replace_stmt (gimple_stmt_iterator *gsi, stmt_vec_info stmt_info,
+			gimple *new_stmt)
+{
+  gimple *old_stmt = stmt_info->stmt;
+  gcc_assert (!stmt_info->pattern_stmt_p && old_stmt == gsi_stmt (*gsi));
+  set_vinfo_for_stmt (old_stmt, NULL);
+  set_vinfo_for_stmt (new_stmt, stmt_info);
+  stmt_info->stmt = new_stmt;
+  gsi_replace (gsi, new_stmt, true);
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:24:19.540339838 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:24:22.680311942 +0100
@@ -4048,11 +4048,8 @@ vect_remove_slp_scalar_calls (slp_tree n
 	continue;
       lhs = gimple_call_lhs (stmt);
       new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
-      set_vinfo_for_stmt (new_stmt, stmt_info);
-      set_vinfo_for_stmt (stmt, NULL);
-      STMT_VINFO_STMT (stmt_info) = new_stmt;
       gsi = gsi_for_stmt (stmt);
-      gsi_replace (&gsi, new_stmt, false);
+      stmt_info->vinfo->replace_stmt (&gsi, stmt_info, new_stmt);
       SSA_NAME_DEF_STMT (gimple_assign_lhs (new_stmt)) = new_stmt;
     }
 }
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:19.544339803 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:22.684311906 +0100
@@ -3629,10 +3629,7 @@ vectorizable_call (stmt_vec_info stmt_in
 
   gassign *new_stmt
     = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
-  set_vinfo_for_stmt (new_stmt, stmt_info);
-  set_vinfo_for_stmt (stmt_info->stmt, NULL);
-  STMT_VINFO_STMT (stmt_info) = new_stmt;
-  gsi_replace (gsi, new_stmt, false);
+  vinfo->replace_stmt (gsi, stmt_info, new_stmt);
 
   return true;
 }
@@ -4370,10 +4367,7 @@ vectorizable_simd_clone_call (stmt_vec_i
     }
   else
     new_stmt = gimple_build_nop ();
-  set_vinfo_for_stmt (new_stmt, stmt_info);
-  set_vinfo_for_stmt (stmt, NULL);
-  STMT_VINFO_STMT (stmt_info) = new_stmt;
-  gsi_replace (gsi, new_stmt, true);
+  vinfo->replace_stmt (gsi, stmt_info, new_stmt);
   unlink_stmt_vdef (stmt);
 
   return true;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [45/46] Remove vect_stmt_in_region_p
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (41 preceding siblings ...)
  2018-07-24 10:09 ` [42/46] Add vec_info::replace_stmt Richard Sandiford
@ 2018-07-24 10:10 ` Richard Sandiford
  2018-07-31 12:06   ` Richard Biener
  2018-07-24 10:10 ` [43/46] Make free_stmt_vec_info take a stmt_vec_info Richard Sandiford
                   ` (2 subsequent siblings)
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:10 UTC (permalink / raw)
  To: gcc-patches

Unlike the old vinfo_for_stmt, vec_info::lookup_stmt can cope with
any statement, so there's no need to check beforehand that the statement
is part of the vectorisable region.  This means that there are no longer
any calls to vect_stmt_in_region_p.
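
The idiom that replaces the explicit region check is simply (a sketch
with placeholder names):

  /* Before: guard against statements outside the region.  */
  stmt_vec_info stmt_info = NULL;
  if (vect_stmt_in_region_p (vinfo, stmt))
    stmt_info = vinfo_for_stmt (stmt);

  /* After: lookup_stmt itself returns null for such statements.  */
  stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);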


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_stmt_in_region_p): Delete.
	* tree-vectorizer.c (vect_stmt_in_region_p): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:29.300253129 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:32.472224947 +0100
@@ -1609,7 +1609,6 @@ void vect_pattern_recog (vec_info *);
 
 /* In tree-vectorizer.c.  */
 unsigned vectorize_loops (void);
-bool vect_stmt_in_region_p (vec_info *, gimple *);
 void vect_free_loop_info_assumptions (struct loop *);
 
 #endif  /* GCC_TREE_VECTORIZER_H  */
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:29.300253129 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:32.472224947 +0100
@@ -700,33 +700,6 @@ vect_free_loop_info_assumptions (struct
   loop_constraint_clear (loop, LOOP_C_FINITE);
 }
 
-/* Return whether STMT is inside the region we try to vectorize.  */
-
-bool
-vect_stmt_in_region_p (vec_info *vinfo, gimple *stmt)
-{
-  if (!gimple_bb (stmt))
-    return false;
-
-  if (loop_vec_info loop_vinfo = dyn_cast <loop_vec_info> (vinfo))
-    {
-      struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
-      if (!flow_bb_inside_loop_p (loop, gimple_bb (stmt)))
-	return false;
-    }
-  else
-    {
-      bb_vec_info bb_vinfo = as_a <bb_vec_info> (vinfo);
-      if (gimple_bb (stmt) != BB_VINFO_BB (bb_vinfo)
-	  || gimple_uid (stmt) == -1U
-	  || gimple_code (stmt) == GIMPLE_PHI)
-	return false;
-    }
-
-  return true;
-}
-
-
 /* If LOOP has been versioned during ifcvt, return the internal call
    guarding it.  */
 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [43/46] Make free_stmt_vec_info take a stmt_vec_info
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (42 preceding siblings ...)
  2018-07-24 10:10 ` [45/46] Remove vect_stmt_in_region_p Richard Sandiford
@ 2018-07-24 10:10 ` Richard Sandiford
  2018-07-31 12:03   ` Richard Biener
  2018-07-24 10:10 ` [44/46] Remove global vinfo_for_stmt-related routines Richard Sandiford
  2018-07-24 10:11 ` [46/46] Turn stmt_vec_info back into a typedef Richard Sandiford
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:10 UTC (permalink / raw)
  To: gcc-patches

This patch makes free_stmt_vec_info take the stmt_vec_info that
it's supposed to free and makes it free only that stmt_vec_info.
Callers need to update the statement mapping where necessary
(but now there are only a couple of callers).

This in turn means that we can leave ~vec_info to do the actual
freeing, since there's no longer a need to do it before resetting
the gimple_uids.
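
After the change, vec_info::remove_stmt is the model caller: it
clears the statement's entry in the mapping itself and only then
frees, as in the tree-vectorizer.c hunk below:

  set_vinfo_for_stmt (stmt_info->stmt, NULL);
  gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
  unlink_stmt_vdef (stmt_info->stmt);
  gsi_remove (&si, true);
  release_defs (stmt_info->stmt);
  free_stmt_vec_info (stmt_info);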


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (free_stmt_vec_info): Take a stmt_vec_info
	rather than a gimple stmt.
	* tree-vect-stmts.c (free_stmt_vec_info): Likewise.  Don't free
	information for pattern statements when passed the original
	statement; instead wait to be passed the pattern statement itself.
	Don't call set_vinfo_for_stmt here.
	(free_stmt_vec_infos): Update call to free_stmt_vec_info.
	* tree-vect-loop.c (_loop_vec_info::~_loop_vec_info): Don't free
	stmt_vec_infos here.
	* tree-vect-slp.c (_bb_vec_info::~bb_vec_info): Likewise.
	* tree-vectorizer.c (vec_info::remove_stmt): Nullify the statement's
	stmt_vec_infos entry.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:22.684311906 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:26.084281700 +0100
@@ -1484,7 +1484,7 @@ extern bool supportable_narrowing_operat
 					     enum tree_code *,
 					     int *, vec<tree> *);
 extern stmt_vec_info new_stmt_vec_info (gimple *stmt, vec_info *);
-extern void free_stmt_vec_info (gimple *stmt);
+extern void free_stmt_vec_info (stmt_vec_info);
 extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
 				  enum vect_cost_for_stmt, stmt_vec_info,
 				  int, enum vect_cost_model_location);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:22.684311906 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:26.084281700 +0100
@@ -9916,7 +9916,7 @@ free_stmt_vec_infos (vec<stmt_vec_info>
   stmt_vec_info info;
   FOR_EACH_VEC_ELT (*v, i, info)
     if (info != NULL_STMT_VEC_INFO)
-      free_stmt_vec_info (STMT_VINFO_STMT (info));
+      free_stmt_vec_info (info);
   if (v == stmt_vec_info_vec)
     stmt_vec_info_vec = NULL;
   v->release ();
@@ -9926,44 +9926,18 @@ free_stmt_vec_infos (vec<stmt_vec_info>
 /* Free stmt vectorization related info.  */
 
 void
-free_stmt_vec_info (gimple *stmt)
+free_stmt_vec_info (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-
-  if (!stmt_info)
-    return;
-
-  /* Check if this statement has a related "pattern stmt"
-     (introduced by the vectorizer during the pattern recognition
-     pass).  Free pattern's stmt_vec_info and def stmt's stmt_vec_info
-     too.  */
-  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
+  if (stmt_info->pattern_stmt_p)
     {
-      if (gimple_seq seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info))
-	for (gimple_stmt_iterator si = gsi_start (seq);
-	     !gsi_end_p (si); gsi_next (&si))
-	  {
-	    gimple *seq_stmt = gsi_stmt (si);
-	    gimple_set_bb (seq_stmt, NULL);
-	    tree lhs = gimple_get_lhs (seq_stmt);
-	    if (lhs && TREE_CODE (lhs) == SSA_NAME)
-	      release_ssa_name (lhs);
-	    free_stmt_vec_info (seq_stmt);
-	  }
-      stmt_vec_info patt_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
-      if (patt_stmt_info)
-	{
-	  gimple_set_bb (patt_stmt_info->stmt, NULL);
-	  tree lhs = gimple_get_lhs (patt_stmt_info->stmt);
-	  if (lhs && TREE_CODE (lhs) == SSA_NAME)
-	    release_ssa_name (lhs);
-	  free_stmt_vec_info (patt_stmt_info);
-	}
+      gimple_set_bb (stmt_info->stmt, NULL);
+      tree lhs = gimple_get_lhs (stmt_info->stmt);
+      if (lhs && TREE_CODE (lhs) == SSA_NAME)
+	release_ssa_name (lhs);
     }
 
   STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
   STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
-  set_vinfo_for_stmt (stmt, NULL);
   free (stmt_info);
 }
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:24:19.540339838 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:26.080281735 +0100
@@ -894,9 +894,6 @@ _loop_vec_info::~_loop_vec_info ()
   for (j = 0; j < nbbs; j++)
     {
       basic_block bb = bbs[j];
-      for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si))
-        free_stmt_vec_info (gsi_stmt (si));
-
       for (si = gsi_start_bb (bb); !gsi_end_p (si); )
         {
 	  gimple *stmt = gsi_stmt (si);
@@ -936,9 +933,6 @@ _loop_vec_info::~_loop_vec_info ()
 		    }
 		}
 	    }
-
-	  /* Free stmt_vec_info.  */
-	  free_stmt_vec_info (stmt);
           gsi_next (&si);
         }
     }
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:24:22.680311942 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:24:26.080281735 +0100
@@ -2490,17 +2490,8 @@ _bb_vec_info::~_bb_vec_info ()
 {
   for (gimple_stmt_iterator si = region_begin;
        gsi_stmt (si) != gsi_stmt (region_end); gsi_next (&si))
-    {
-      gimple *stmt = gsi_stmt (si);
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-
-      if (stmt_info)
-        /* Free stmt_vec_info.  */
-        free_stmt_vec_info (stmt);
-
-      /* Reset region marker.  */
-      gimple_set_uid (stmt, -1);
-    }
+    /* Reset region marker.  */
+    gimple_set_uid (gsi_stmt (si), -1);
 
   bb->aux = NULL;
 }
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:22.684311906 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:26.084281700 +0100
@@ -584,6 +584,7 @@ vec_info::lookup_dr (data_reference *dr)
 vec_info::remove_stmt (stmt_vec_info stmt_info)
 {
   gcc_assert (!stmt_info->pattern_stmt_p);
+  set_vinfo_for_stmt (stmt_info->stmt, NULL);
   gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
   unlink_stmt_vdef (stmt_info->stmt);
   gsi_remove (&si, true);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [44/46] Remove global vinfo_for_stmt-related routines
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (43 preceding siblings ...)
  2018-07-24 10:10 ` [43/46] Make free_stmt_vec_info take a stmt_vec_info Richard Sandiford
@ 2018-07-24 10:10 ` Richard Sandiford
  2018-07-31 12:05   ` Richard Biener
  2018-07-24 10:11 ` [46/46] Turn stmt_vec_info back into a typedef Richard Sandiford
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:10 UTC (permalink / raw)
  To: gcc-patches

There are no more direct uses of:

- new_stmt_vec_info
- set_vinfo_for_stmt
- free_stmt_vec_infos
- free_stmt_vec_info

outside of vec_info, so they can now be private member functions.
It also seemed better to put them in tree-vectorizer.c, along with the
other vec_info routines.

We can also get rid of:

- vinfo_for_stmt
- stmt_vec_info_vec
- set_stmt_vec_info_vec

since nothing now uses them.  This was the main goal of the series.
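
The resulting shape of the class, trimmed to the members involved (a
sketch, not the full declaration):

  struct vec_info {
    stmt_vec_info add_stmt (gimple *);
    stmt_vec_info lookup_stmt (gimple *);
    ...

  private:
    stmt_vec_info new_stmt_vec_info (gimple *stmt);
    void set_vinfo_for_stmt (gimple *, stmt_vec_info);
    void free_stmt_vec_infos ();
    void free_stmt_vec_info (stmt_vec_info);
  };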


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::new_stmt_vec_info)
	(vec_info::set_vinfo_for_stmt, vec_info::free_stmt_vec_infos)
	(vec_info::free_stmt_vec_info): New private member functions.
	(set_stmt_vec_info_vec, free_stmt_vec_infos, vinfo_for_stmt)
	(set_vinfo_for_stmt, new_stmt_vec_info, free_stmt_vec_info): Delete.
	* tree-parloops.c (gather_scalar_reductions): Remove calls to
	set_stmt_vec_info_vec and free_stmt_vec_infos.
	* tree-vect-loop.c (_loop_vec_info::~_loop_vec_info): Remove call to
	set_stmt_vec_info_vec.
	* tree-vect-stmts.c (new_stmt_vec_info, set_stmt_vec_info_vec)
	(free_stmt_vec_infos, free_stmt_vec_info): Delete in favor of...
	* tree-vectorizer.c (vec_info::new_stmt_vec_info)
	(vec_info::set_vinfo_for_stmt, vec_info::free_stmt_vec_infos)
	(vec_info::free_stmt_vec_info): ...these new functions.  Remove
	assignments in {vec_info::,}new_stmt_vec_info that are redundant
	with the clearing in the xcalloc.
	(stmt_vec_info_vec): Delete.
	(vec_info::vec_info): Don't call set_stmt_vec_info_vec.
	(vectorize_loops): Likewise.
	(vec_info::~vec_info): Remove argument from call to
	free_stmt_vec_infos.
	(vec_info::add_stmt): Remove vinfo argument from call to
	new_stmt_vec_info.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:26.084281700 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:29.300253129 +0100
@@ -266,6 +266,12 @@ struct vec_info {
 
   /* Cost data used by the target cost model.  */
   void *target_cost_data;
+
+private:
+  stmt_vec_info new_stmt_vec_info (gimple *stmt);
+  void set_vinfo_for_stmt (gimple *, stmt_vec_info);
+  void free_stmt_vec_infos ();
+  void free_stmt_vec_info (stmt_vec_info);
 };
 
 struct _loop_vec_info;
@@ -1085,43 +1091,6 @@ inline stmt_vec_info::operator gimple *
   return m_ptr ? m_ptr->stmt : NULL;
 }
 
-extern vec<stmt_vec_info> *stmt_vec_info_vec;
-
-void set_stmt_vec_info_vec (vec<stmt_vec_info> *);
-void free_stmt_vec_infos (vec<stmt_vec_info> *);
-
-/* Return a stmt_vec_info corresponding to STMT.  */
-
-static inline stmt_vec_info
-vinfo_for_stmt (gimple *stmt)
-{
-  int uid = gimple_uid (stmt);
-  if (uid <= 0)
-    return NULL;
-
-  return (*stmt_vec_info_vec)[uid - 1];
-}
-
-/* Set vectorizer information INFO for STMT.  */
-
-static inline void
-set_vinfo_for_stmt (gimple *stmt, stmt_vec_info info)
-{
-  unsigned int uid = gimple_uid (stmt);
-  if (uid == 0)
-    {
-      gcc_checking_assert (info);
-      uid = stmt_vec_info_vec->length () + 1;
-      gimple_set_uid (stmt, uid);
-      stmt_vec_info_vec->safe_push (info);
-    }
-  else
-    {
-      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
-      (*stmt_vec_info_vec)[uid - 1] = info;
-    }
-}
-
 static inline bool
 nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
 {
@@ -1483,8 +1452,6 @@ extern bool supportable_widening_operati
 extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
 					     enum tree_code *,
 					     int *, vec<tree> *);
-extern stmt_vec_info new_stmt_vec_info (gimple *stmt, vec_info *);
-extern void free_stmt_vec_info (stmt_vec_info);
 extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
 				  enum vect_cost_for_stmt, stmt_vec_info,
 				  int, enum vect_cost_model_location);
Index: gcc/tree-parloops.c
===================================================================
--- gcc/tree-parloops.c	2018-07-24 10:22:57.273070426 +0100
+++ gcc/tree-parloops.c	2018-07-24 10:24:29.296253164 +0100
@@ -2592,10 +2592,6 @@ gather_scalar_reductions (loop_p loop, r
   auto_vec<gphi *, 4> double_reduc_phis;
   auto_vec<gimple *, 4> double_reduc_stmts;
 
-  vec<stmt_vec_info> stmt_vec_infos;
-  stmt_vec_infos.create (50);
-  set_stmt_vec_info_vec (&stmt_vec_infos);
-
   vec_info_shared shared;
   simple_loop_info = vect_analyze_loop_form (loop, &shared);
   if (simple_loop_info == NULL)
@@ -2679,14 +2675,11 @@ gather_scalar_reductions (loop_p loop, r
     }
 
  gather_done:
-  /* Release the claim on gimple_uid.  */
-  free_stmt_vec_infos (&stmt_vec_infos);
-
   if (reduction_list->elements () == 0)
     return;
 
   /* As gimple_uid is used by the vectorizer in between vect_analyze_loop_form
-     and free_stmt_vec_info_vec, we can set gimple_uid of reduc_phi stmts only
+     and delete simple_loop_info, we can set gimple_uid of reduc_phi stmts only
      now.  */
   basic_block bb;
   FOR_EACH_BB_FN (bb, cfun)
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:24:26.080281735 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:29.296253164 +0100
@@ -888,8 +888,6 @@ _loop_vec_info::~_loop_vec_info ()
   gimple_stmt_iterator si;
   int j;
 
-  /* ???  We're releasing loop_vinfos en-block.  */
-  set_stmt_vec_info_vec (&stmt_vec_infos);
   nbbs = loop->num_nodes;
   for (j = 0; j < nbbs; j++)
     {
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:26.084281700 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:29.300253129 +0100
@@ -9850,98 +9850,6 @@ vect_remove_stores (stmt_vec_info first_
     }
 }
 
-
-/* Function new_stmt_vec_info.
-
-   Create and initialize a new stmt_vec_info struct for STMT.  */
-
-stmt_vec_info
-new_stmt_vec_info (gimple *stmt, vec_info *vinfo)
-{
-  stmt_vec_info res;
-  res = (_stmt_vec_info *) xcalloc (1, sizeof (struct _stmt_vec_info));
-
-  STMT_VINFO_TYPE (res) = undef_vec_info_type;
-  STMT_VINFO_STMT (res) = stmt;
-  res->vinfo = vinfo;
-  STMT_VINFO_RELEVANT (res) = vect_unused_in_scope;
-  STMT_VINFO_LIVE_P (res) = false;
-  STMT_VINFO_VECTYPE (res) = NULL;
-  STMT_VINFO_VEC_STMT (res) = NULL;
-  STMT_VINFO_VECTORIZABLE (res) = true;
-  STMT_VINFO_IN_PATTERN_P (res) = false;
-  STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
-  STMT_VINFO_DATA_REF (res) = NULL;
-  STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
-  STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
-
-  if (gimple_code (stmt) == GIMPLE_PHI
-      && is_loop_header_bb_p (gimple_bb (stmt)))
-    STMT_VINFO_DEF_TYPE (res) = vect_unknown_def_type;
-  else
-    STMT_VINFO_DEF_TYPE (res) = vect_internal_def;
-
-  STMT_VINFO_SAME_ALIGN_REFS (res).create (0);
-  STMT_SLP_TYPE (res) = loop_vect;
-  STMT_VINFO_NUM_SLP_USES (res) = 0;
-
-  res->first_element = NULL; /* GROUP_FIRST_ELEMENT */
-  res->next_element = NULL; /* GROUP_NEXT_ELEMENT */
-  res->size = 0; /* GROUP_SIZE */
-  res->store_count = 0; /* GROUP_STORE_COUNT */
-  res->gap = 0; /* GROUP_GAP */
-  res->same_dr_stmt = NULL; /* GROUP_SAME_DR_STMT */
-
-  /* This is really "uninitialized" until vect_compute_data_ref_alignment.  */
-  res->dr_aux.misalignment = DR_MISALIGNMENT_UNINITIALIZED;
-
-  return res;
-}
-
-
-/* Set the current stmt_vec_info vector to V.  */
-
-void
-set_stmt_vec_info_vec (vec<stmt_vec_info> *v)
-{
-  stmt_vec_info_vec = v;
-}
-
-/* Free the stmt_vec_info entries in V and release V.  */
-
-void
-free_stmt_vec_infos (vec<stmt_vec_info> *v)
-{
-  unsigned int i;
-  stmt_vec_info info;
-  FOR_EACH_VEC_ELT (*v, i, info)
-    if (info != NULL_STMT_VEC_INFO)
-      free_stmt_vec_info (info);
-  if (v == stmt_vec_info_vec)
-    stmt_vec_info_vec = NULL;
-  v->release ();
-}
-
-
-/* Free stmt vectorization related info.  */
-
-void
-free_stmt_vec_info (stmt_vec_info stmt_info)
-{
-  if (stmt_info->pattern_stmt_p)
-    {
-      gimple_set_bb (stmt_info->stmt, NULL);
-      tree lhs = gimple_get_lhs (stmt_info->stmt);
-      if (lhs && TREE_CODE (lhs) == SSA_NAME)
-	release_ssa_name (lhs);
-    }
-
-  STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
-  STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
-  free (stmt_info);
-}
-
-
 /* Function get_vectype_for_scalar_type_and_size.
 
    Returns the vector type corresponding to SCALAR_TYPE  and SIZE as supported
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:26.084281700 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:29.300253129 +0100
@@ -84,9 +84,6 @@ Software Foundation; either version 3, o
 /* Loop or bb location, with hotness information.  */
 dump_user_location_t vect_location;
 
-/* Vector mapping GIMPLE stmt to stmt_vec_info. */
-vec<stmt_vec_info> *stmt_vec_info_vec;
-
 /* Dump a cost entry according to args to F.  */
 
 void
@@ -457,7 +454,6 @@ vec_info::vec_info (vec_info::vec_kind k
     target_cost_data (target_cost_data_in)
 {
   stmt_vec_infos.create (50);
-  set_stmt_vec_info_vec (&stmt_vec_infos);
 }
 
 vec_info::~vec_info ()
@@ -469,7 +465,7 @@ vec_info::~vec_info ()
     vect_free_slp_instance (instance, true);
 
   destroy_cost_data (target_cost_data);
-  free_stmt_vec_infos (&stmt_vec_infos);
+  free_stmt_vec_infos ();
 }
 
 vec_info_shared::vec_info_shared ()
@@ -513,7 +509,7 @@ vec_info_shared::check_datarefs ()
 stmt_vec_info
 vec_info::add_stmt (gimple *stmt)
 {
-  stmt_vec_info res = new_stmt_vec_info (stmt, this);
+  stmt_vec_info res = new_stmt_vec_info (stmt);
   set_vinfo_for_stmt (stmt, res);
   return res;
 }
@@ -608,6 +604,87 @@ vec_info::replace_stmt (gimple_stmt_iter
   gsi_replace (gsi, new_stmt, true);
 }
 
+/* Create and initialize a new stmt_vec_info struct for STMT.  */
+
+stmt_vec_info
+vec_info::new_stmt_vec_info (gimple *stmt)
+{
+  stmt_vec_info res = XCNEW (struct _stmt_vec_info);
+  res->vinfo = this;
+  res->stmt = stmt;
+
+  STMT_VINFO_TYPE (res) = undef_vec_info_type;
+  STMT_VINFO_RELEVANT (res) = vect_unused_in_scope;
+  STMT_VINFO_VECTORIZABLE (res) = true;
+  STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
+  STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
+
+  if (gimple_code (stmt) == GIMPLE_PHI
+      && is_loop_header_bb_p (gimple_bb (stmt)))
+    STMT_VINFO_DEF_TYPE (res) = vect_unknown_def_type;
+  else
+    STMT_VINFO_DEF_TYPE (res) = vect_internal_def;
+
+  STMT_VINFO_SAME_ALIGN_REFS (res).create (0);
+  STMT_SLP_TYPE (res) = loop_vect;
+
+  /* This is really "uninitialized" until vect_compute_data_ref_alignment.  */
+  res->dr_aux.misalignment = DR_MISALIGNMENT_UNINITIALIZED;
+
+  return res;
+}
+
+/* Associate STMT with INFO.  */
+
+void
+vec_info::set_vinfo_for_stmt (gimple *stmt, stmt_vec_info info)
+{
+  unsigned int uid = gimple_uid (stmt);
+  if (uid == 0)
+    {
+      gcc_checking_assert (info);
+      uid = stmt_vec_infos.length () + 1;
+      gimple_set_uid (stmt, uid);
+      stmt_vec_infos.safe_push (info);
+    }
+  else
+    {
+      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
+      stmt_vec_infos[uid - 1] = info;
+    }
+}
+
+/* Free the contents of stmt_vec_infos.  */
+
+void
+vec_info::free_stmt_vec_infos (void)
+{
+  unsigned int i;
+  stmt_vec_info info;
+  FOR_EACH_VEC_ELT (stmt_vec_infos, i, info)
+    if (info != NULL_STMT_VEC_INFO)
+      free_stmt_vec_info (info);
+  stmt_vec_infos.release ();
+}
+
+/* Free STMT_INFO.  */
+
+void
+vec_info::free_stmt_vec_info (stmt_vec_info stmt_info)
+{
+  if (stmt_info->pattern_stmt_p)
+    {
+      gimple_set_bb (stmt_info->stmt, NULL);
+      tree lhs = gimple_get_lhs (stmt_info->stmt);
+      if (lhs && TREE_CODE (lhs) == SSA_NAME)
+	release_ssa_name (lhs);
+    }
+
+  STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
+  STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
+  free (stmt_info);
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
@@ -963,8 +1040,6 @@ vectorize_loops (void)
   if (cfun->has_simduid_loops)
     note_simd_array_uses (&simd_array_to_simduid_htab);
 
-  set_stmt_vec_info_vec (NULL);
-
   /*  ----------- Analyze loops. -----------  */
 
   /* If some loop was duplicated, it gets bigger number

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [46/46] Turn stmt_vec_info back into a typedef
  2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
                   ` (44 preceding siblings ...)
  2018-07-24 10:10 ` [44/46] Remove global vinfo_for_stmt-related routines Richard Sandiford
@ 2018-07-24 10:11 ` Richard Sandiford
  2018-07-31 12:07   ` Richard Biener
  45 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-24 10:11 UTC (permalink / raw)
  To: gcc-patches

This patch removes the stmt_vec_info wrapper class added near the
beginning of the series and turns stmt_vec_info back into a typedef.
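
The wrapper only existed so that stmt_vec_info could still convert
implicitly to gimple * while call sites were being migrated; with no
such uses left, it reduces to the old pointer typedef:

  typedef struct _stmt_vec_info *stmt_vec_info;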


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (stmt_vec_info): Turn back into a typedef.
	(NULL_STMT_VEC_INFO): Delete.
	(stmt_vec_info::operator*): Likewise.
	(stmt_vec_info::operator gimple *): Likewise.
	* tree-vect-loop.c (vectorizable_reduction): Use NULL instead
	of NULL_STMT_VEC_INFO.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
	(vect_reassociating_reduction_p): Likewise.
	* tree-vect-stmts.c (vect_build_gather_load_calls): Likewise.
	(vectorizable_store): Likewise.
	* tree-vectorizer.c (vec_info::set_vinfo_for_stmt): Likewise.
	(vec_info::free_stmt_vec_infos): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:32.472224947 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:35.888194598 +0100
@@ -21,26 +21,7 @@ Software Foundation; either version 3, o
 #ifndef GCC_TREE_VECTORIZER_H
 #define GCC_TREE_VECTORIZER_H
 
-class stmt_vec_info {
-public:
-  stmt_vec_info () {}
-  stmt_vec_info (struct _stmt_vec_info *ptr) : m_ptr (ptr) {}
-  struct _stmt_vec_info *operator-> () const { return m_ptr; }
-  struct _stmt_vec_info &operator* () const;
-  operator struct _stmt_vec_info * () const { return m_ptr; }
-  operator gimple * () const;
-  operator void * () const { return m_ptr; }
-  operator bool () const { return m_ptr; }
-  bool operator == (const stmt_vec_info &x) { return x.m_ptr == m_ptr; }
-  bool operator == (_stmt_vec_info *x) { return x == m_ptr; }
-  bool operator != (const stmt_vec_info &x) { return x.m_ptr != m_ptr; }
-  bool operator != (_stmt_vec_info *x) { return x != m_ptr; }
-
-private:
-  struct _stmt_vec_info *m_ptr;
-};
-
-#define NULL_STMT_VEC_INFO (stmt_vec_info (NULL))
+typedef struct _stmt_vec_info *stmt_vec_info;
 
 #include "tree-data-ref.h"
 #include "tree-hash-traits.h"
@@ -1080,17 +1061,6 @@ #define VECT_SCALAR_BOOLEAN_TYPE_P(TYPE)
        && TYPE_PRECISION (TYPE) == 1		\
        && TYPE_UNSIGNED (TYPE)))
 
-inline _stmt_vec_info &
-stmt_vec_info::operator* () const
-{
-  return *m_ptr;
-}
-
-inline stmt_vec_info::operator gimple * () const
-{
-  return m_ptr ? m_ptr->stmt : NULL;
-}
-
 static inline bool
 nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
 {
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:24:29.296253164 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:24:35.884194634 +0100
@@ -6755,7 +6755,7 @@ vectorizable_reduction (stmt_vec_info st
   if (slp_node)
     neutral_op = neutral_op_for_slp_reduction
       (slp_node_instance->reduc_phis, code,
-       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL_STMT_VEC_INFO);
+       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL);
 
   if (double_reduc && reduction_type == FOLD_LEFT_REDUCTION)
     {
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-24 10:24:02.360492422 +0100
+++ gcc/tree-vect-patterns.c	2018-07-24 10:24:35.884194634 +0100
@@ -104,7 +104,7 @@ vect_init_pattern_stmt (gimple *pattern_
 {
   vec_info *vinfo = orig_stmt_info->vinfo;
   stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
-  if (pattern_stmt_info == NULL_STMT_VEC_INFO)
+  if (pattern_stmt_info == NULL)
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
@@ -819,7 +819,7 @@ vect_reassociating_reduction_p (stmt_vec
 {
   return (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
 	  ? STMT_VINFO_REDUC_TYPE (stmt_vinfo) != FOLD_LEFT_REDUCTION
-	  : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL_STMT_VEC_INFO);
+	  : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL);
 }
 
 /* As above, but also require it to have code CODE and to be a reduction
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:29.300253129 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:35.888194598 +0100
@@ -2842,7 +2842,7 @@ vect_build_gather_load_calls (stmt_vec_i
 	  new_stmt_info = loop_vinfo->lookup_def (var);
 	}
 
-      if (prev_stmt_info == NULL_STMT_VEC_INFO)
+      if (prev_stmt_info == NULL)
 	STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
       else
 	STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
@@ -6574,7 +6574,7 @@ vectorizable_store (stmt_vec_info stmt_i
 	  stmt_vec_info new_stmt_info
 	    = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
 
-	  if (prev_stmt_info == NULL_STMT_VEC_INFO)
+	  if (prev_stmt_info == NULL)
 	    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
 	  else
 	    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-24 10:24:32.472224947 +0100
+++ gcc/tree-vectorizer.c	2018-07-24 10:24:35.888194598 +0100
@@ -649,7 +649,7 @@ vec_info::set_vinfo_for_stmt (gimple *st
     }
   else
     {
-      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
+      gcc_checking_assert (info == NULL);
       stmt_vec_infos[uid - 1] = info;
     }
 }
@@ -662,7 +662,7 @@ vec_info::free_stmt_vec_infos (void)
   unsigned int i;
   stmt_vec_info info;
   FOR_EACH_VEC_ELT (stmt_vec_infos, i, info)
-    if (info != NULL_STMT_VEC_INFO)
+    if (info != NULL)
       free_stmt_vec_info (info);
   stmt_vec_infos.release ();
 }
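
With the wrapper class reduced to a plain pointer typedef, null tests
on stmt_vec_info go back to being ordinary pointer comparisons.  A
minimal sketch of the resulting idiom, modelled on the
vect_init_pattern_stmt hunk above (the caller here is hypothetical):

  stmt_vec_info info = vinfo->lookup_stmt (stmt);
  if (info == NULL)   /* previously: info == NULL_STMT_VEC_INFO */
    info = vinfo->add_stmt (stmt);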


* Re: [01/46] Move special cases out of get_initial_def_for_reduction
  2018-07-24  9:52 ` [01/46] Move special cases out of get_initial_def_for_reduction Richard Sandiford
@ 2018-07-25  8:42   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  8:42 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:53 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This minor clean-up avoids repeating the test for double reductions
> and also moves the vect_get_vec_def_for_operand call to the same
> function as the corresponding vect_get_vec_def_for_stmt_copy.

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (get_initial_def_for_reduction): Move special
>         cases for nested loops from here to ...
>         (vect_create_epilog_for_reduction): ...here.  Only call
>         vect_is_simple_use for inner-loop reductions.
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-13 10:11:14.429843575 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:02.965552667 +0100
> @@ -4113,10 +4113,8 @@ get_initial_def_for_reduction (gimple *s
>    enum tree_code code = gimple_assign_rhs_code (stmt);
>    tree def_for_init;
>    tree init_def;
> -  bool nested_in_vect_loop = false;
>    REAL_VALUE_TYPE real_init_val = dconst0;
>    int int_init_val = 0;
> -  gimple *def_stmt = NULL;
>    gimple_seq stmts = NULL;
>
>    gcc_assert (vectype);
> @@ -4124,39 +4122,12 @@ get_initial_def_for_reduction (gimple *s
>    gcc_assert (POINTER_TYPE_P (scalar_type) || INTEGRAL_TYPE_P (scalar_type)
>               || SCALAR_FLOAT_TYPE_P (scalar_type));
>
> -  if (nested_in_vect_loop_p (loop, stmt))
> -    nested_in_vect_loop = true;
> -  else
> -    gcc_assert (loop == (gimple_bb (stmt))->loop_father);
> -
> -  /* In case of double reduction we only create a vector variable to be put
> -     in the reduction phi node.  The actual statement creation is done in
> -     vect_create_epilog_for_reduction.  */
> -  if (adjustment_def && nested_in_vect_loop
> -      && TREE_CODE (init_val) == SSA_NAME
> -      && (def_stmt = SSA_NAME_DEF_STMT (init_val))
> -      && gimple_code (def_stmt) == GIMPLE_PHI
> -      && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
> -      && vinfo_for_stmt (def_stmt)
> -      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
> -          == vect_double_reduction_def)
> -    {
> -      *adjustment_def = NULL;
> -      return vect_create_destination_var (init_val, vectype);
> -    }
> +  gcc_assert (nested_in_vect_loop_p (loop, stmt)
> +             || loop == (gimple_bb (stmt))->loop_father);
>
>    vect_reduction_type reduction_type
>      = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_vinfo);
>
> -  /* In case of a nested reduction do not use an adjustment def as
> -     that case is not supported by the epilogue generation correctly
> -     if ncopies is not one.  */
> -  if (adjustment_def && nested_in_vect_loop)
> -    {
> -      *adjustment_def = NULL;
> -      return vect_get_vec_def_for_operand (init_val, stmt);
> -    }
> -
>    switch (code)
>      {
>      case WIDEN_SUM_EXPR:
> @@ -4586,9 +4557,22 @@ vect_create_epilog_for_reduction (vec<tr
>               || (induc_code == MIN_EXPR
>                   && tree_int_cst_lt (induc_val, initial_def))))
>         induc_val = initial_def;
> -      vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
> -      vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
> -                                                      &adjustment_def);
> +
> +      if (double_reduc)
> +       /* In case of double reduction we only create a vector variable
> +          to be put in the reduction phi node.  The actual statement
> +          creation is done later in this function.  */
> +       vec_initial_def = vect_create_destination_var (initial_def, vectype);
> +      else if (nested_in_vect_loop)
> +       {
> +         /* Do not use an adjustment def as that case is not supported
> +            correctly if ncopies is not one.  */
> +         vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
> +         vec_initial_def = vect_get_vec_def_for_operand (initial_def, stmt);
> +       }
> +      else
> +       vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
> +                                                        &adjustment_def);
>        vec_initial_defs.create (1);
>        vec_initial_defs.quick_push (vec_initial_def);
>      }


* Re: [02/46] Remove dead vectorizable_reduction code
  2018-07-24  9:53 ` [02/46] Remove dead vectorizable_reduction code Richard Sandiford
@ 2018-07-25  8:43   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  8:43 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:53 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> vectorizable_reduction has old code to cope with cases in which the
> given statement belongs to a reduction group but isn't the first statement.
> That can no longer happen, since all statements in the group go into the
> same SLP node, and we only check the first statement in each node.
>
> The point is to remove the only path through vectorizable_reduction
> in which stmt and stmt_info refer to different statements.

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (vectorizable_reduction): Assert that the
>         function is not called for second and subsequent members of
>         a reduction group.
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:02.965552667 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:06.269523330 +0100
> @@ -6162,7 +6162,6 @@ vectorizable_reduction (gimple *stmt, gi
>    auto_vec<gimple *> phis;
>    int vec_num;
>    tree def0, tem;
> -  bool first_p = true;
>    tree cr_index_scalar_type = NULL_TREE, cr_index_vector_type = NULL_TREE;
>    tree cond_reduc_val = NULL_TREE;
>
> @@ -6178,15 +6177,8 @@ vectorizable_reduction (gimple *stmt, gi
>        nested_cycle = true;
>      }
>
> -  /* In case of reduction chain we switch to the first stmt in the chain, but
> -     we don't update STMT_INFO, since only the last stmt is marked as reduction
> -     and has reduction properties.  */
> -  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info)
> -      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
> -    {
> -      stmt = REDUC_GROUP_FIRST_ELEMENT (stmt_info);
> -      first_p = false;
> -    }
> +  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
> +    gcc_assert (slp_node && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt);
>
>    if (gimple_code (stmt) == GIMPLE_PHI)
>      {
> @@ -7050,8 +7042,7 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (!vec_stmt) /* transformation not required.  */
>      {
> -      if (first_p)
> -       vect_model_reduction_cost (stmt_info, reduc_fn, ncopies, cost_vec);
> +      vect_model_reduction_cost (stmt_info, reduc_fn, ncopies, cost_vec);
>        if (loop_vinfo && LOOP_VINFO_CAN_FULLY_MASK_P (loop_vinfo))
>         {
>           if (reduction_type != FOLD_LEFT_REDUCTION


* Re: [03/46] Remove unnecessary update of NUM_SLP_USES
  2018-07-24  9:53 ` [03/46] Remove unnecessary update of NUM_SLP_USES Richard Sandiford
@ 2018-07-25  8:46   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  8:46 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:53 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> vect_free_slp_tree had:
>
>   gimple *stmt;
>   FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>     /* After transform some stmts are removed and thus their vinfo is gone.  */
>     if (vinfo_for_stmt (stmt))
>       {
>         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
>         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
>       }
>
> But after transform this update is redundant even for statements that do
> exist, so it seems better to skip this loop for the final teardown.

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_free_slp_instance): Add a final_p parameter.
>         * tree-vect-slp.c (vect_free_slp_tree): Likewise.  Don't update
>         STMT_VINFO_NUM_SLP_USES when it's true.
>         (vect_free_slp_instance): Add a final_p parameter and pass it to
>         vect_free_slp_tree.
>         (vect_build_slp_tree_2): Update call to vect_free_slp_instance.
>         (vect_analyze_slp_instance): Likewise.
>         (vect_slp_analyze_operations): Likewise.
>         (vect_slp_analyze_bb_1): Likewise.
>         * tree-vectorizer.c (vec_info): Likewise.
>         * tree-vect-loop.c (vect_transform_loop): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-03 10:59:30.480481417 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:09.237496975 +0100
> @@ -1634,7 +1634,7 @@ extern int vect_get_known_peeling_cost (
>  extern tree cse_and_gimplify_to_preheader (loop_vec_info, tree);
>
>  /* In tree-vect-slp.c.  */
> -extern void vect_free_slp_instance (slp_instance);
> +extern void vect_free_slp_instance (slp_instance, bool);
>  extern bool vect_transform_slp_perm_load (slp_tree, vec<tree> ,
>                                           gimple_stmt_iterator *, poly_uint64,
>                                           slp_instance, bool, unsigned *);
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-23 16:58:06.000000000 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:09.237496975 +0100
> @@ -47,25 +47,32 @@ Software Foundation; either version 3, o
>  #include "internal-fn.h"
>
>
> -/* Recursively free the memory allocated for the SLP tree rooted at NODE.  */
> +/* Recursively free the memory allocated for the SLP tree rooted at NODE.
> +   FINAL_P is true if we have vectorized the instance or if we have
> +   made a final decision not to vectorize the statements in any way.  */
>
>  static void
> -vect_free_slp_tree (slp_tree node)
> +vect_free_slp_tree (slp_tree node, bool final_p)
>  {
>    int i;
>    slp_tree child;
>
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
> -    vect_free_slp_tree (child);
> +    vect_free_slp_tree (child, final_p);
>
> -  gimple *stmt;
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> -    /* After transform some stmts are removed and thus their vinfo is gone.  */
> -    if (vinfo_for_stmt (stmt))
> -      {
> -       gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
> -       STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
> -      }
> +  /* Don't update STMT_VINFO_NUM_SLP_USES if it isn't relevant.
> +     Some statements might no longer exist, after having been
> +     removed by vect_transform_stmt.  Updating the remaining
> +     statements would be redundant.  */
> +  if (!final_p)
> +    {
> +      gimple *stmt;
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +       {
> +         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
> +         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
> +       }
> +    }
>
>    SLP_TREE_CHILDREN (node).release ();
>    SLP_TREE_SCALAR_STMTS (node).release ();
> @@ -76,12 +83,14 @@ vect_free_slp_tree (slp_tree node)
>  }
>
>
> -/* Free the memory allocated for the SLP instance.  */
> +/* Free the memory allocated for the SLP instance.  FINAL_P is true if we
> +   have vectorized the instance or if we have made a final decision not
> +   to vectorize the statements in any way.  */
>
>  void
> -vect_free_slp_instance (slp_instance instance)
> +vect_free_slp_instance (slp_instance instance, bool final_p)
>  {
> -  vect_free_slp_tree (SLP_INSTANCE_TREE (instance));
> +  vect_free_slp_tree (SLP_INSTANCE_TREE (instance), final_p);
>    SLP_INSTANCE_LOADS (instance).release ();
>    free (instance);
>  }
> @@ -1284,7 +1293,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>        if (++this_tree_size > max_tree_size)
>         {
>           FOR_EACH_VEC_ELT (children, j, child)
> -           vect_free_slp_tree (child);
> +           vect_free_slp_tree (child, false);
>           vect_free_oprnd_info (oprnds_info);
>           return NULL;
>         }
> @@ -1315,7 +1324,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                   this_loads.truncate (old_nloads);
>                   this_tree_size = old_tree_size;
>                   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (child), j, grandchild)
> -                   vect_free_slp_tree (grandchild);
> +                   vect_free_slp_tree (grandchild, false);
>                   SLP_TREE_CHILDREN (child).truncate (0);
>
>                   dump_printf_loc (MSG_NOTE, vect_location,
> @@ -1495,7 +1504,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                       this_loads.truncate (old_nloads);
>                       this_tree_size = old_tree_size;
>                       FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (child), j, grandchild)
> -                       vect_free_slp_tree (grandchild);
> +                       vect_free_slp_tree (grandchild, false);
>                       SLP_TREE_CHILDREN (child).truncate (0);
>
>                       dump_printf_loc (MSG_NOTE, vect_location,
> @@ -1519,7 +1528,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>  fail:
>        gcc_assert (child == NULL);
>        FOR_EACH_VEC_ELT (children, j, child)
> -       vect_free_slp_tree (child);
> +       vect_free_slp_tree (child, false);
>        vect_free_oprnd_info (oprnds_info);
>        return NULL;
>      }
> @@ -2036,13 +2045,13 @@ vect_analyze_slp_instance (vec_info *vin
>                                  "Build SLP failed: store group "
>                                  "size not a multiple of the vector size "
>                                  "in basic block SLP\n");
> -             vect_free_slp_tree (node);
> +             vect_free_slp_tree (node, false);
>               loads.release ();
>               return false;
>             }
>           /* Fatal mismatch.  */
>           matches[group_size / const_max_nunits * const_max_nunits] = false;
> -         vect_free_slp_tree (node);
> +         vect_free_slp_tree (node, false);
>           loads.release ();
>         }
>        else
> @@ -2102,7 +2111,7 @@ vect_analyze_slp_instance (vec_info *vin
>                       dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
>                                         TDF_SLIM, stmt, 0);
>                  }
> -              vect_free_slp_instance (new_instance);
> +             vect_free_slp_instance (new_instance, false);
>                return false;
>              }
>          }
> @@ -2133,7 +2142,7 @@ vect_analyze_slp_instance (vec_info *vin
>                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                  "Built SLP cancelled: can use "
>                                  "load/store-lanes\n");
> -             vect_free_slp_instance (new_instance);
> +             vect_free_slp_instance (new_instance, false);
>               return false;
>             }
>         }
> @@ -2668,7 +2677,7 @@ vect_slp_analyze_operations (vec_info *v
>           dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>                             SLP_TREE_SCALAR_STMTS
>                               (SLP_INSTANCE_TREE (instance))[0], 0);
> -         vect_free_slp_instance (instance);
> +         vect_free_slp_instance (instance, false);
>            vinfo->slp_instances.ordered_remove (i);
>           cost_vec.release ();
>         }
> @@ -2947,7 +2956,7 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
>           dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>                             SLP_TREE_SCALAR_STMTS
>                               (SLP_INSTANCE_TREE (instance))[0], 0);
> -         vect_free_slp_instance (instance);
> +         vect_free_slp_instance (instance, false);
>           BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
>           continue;
>         }
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-06-27 10:27:09.894649672 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:22:09.237496975 +0100
> @@ -466,7 +466,7 @@ vec_info::~vec_info ()
>    unsigned int i;
>
>    FOR_EACH_VEC_ELT (slp_instances, i, instance)
> -    vect_free_slp_instance (instance);
> +    vect_free_slp_instance (instance, true);
>
>    destroy_cost_data (target_cost_data);
>    free_stmt_vec_infos (&stmt_vec_infos);
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:06.269523330 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:09.237496975 +0100
> @@ -2229,7 +2229,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>    LOOP_VINFO_VECT_FACTOR (loop_vinfo) = saved_vectorization_factor;
>    /* Free the SLP instances.  */
>    FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), j, instance)
> -    vect_free_slp_instance (instance);
> +    vect_free_slp_instance (instance, false);
>    LOOP_VINFO_SLP_INSTANCES (loop_vinfo).release ();
>    /* Reset SLP type to loop_vect on all stmts.  */
>    for (i = 0; i < LOOP_VINFO_LOOP (loop_vinfo)->num_nodes; ++i)
> @@ -8683,7 +8683,7 @@ vect_transform_loop (loop_vec_info loop_
>       won't work.  */
>    slp_instance instance;
>    FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), i, instance)
> -    vect_free_slp_instance (instance);
> +    vect_free_slp_instance (instance, true);
>    LOOP_VINFO_SLP_INSTANCES (loop_vinfo).release ();
>    /* Clear-up safelen field since its value is invalid after vectorization
>       since vectorized loop can have loop-carried dependencies.  */
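
Schematically, the new FINAL_P argument separates the two kinds of
call site as follows (a sketch distilled from the hunks above, not
literal patch code):

  /* Analysis or SLP build failed: the statements stay around, so
     the STMT_VINFO_NUM_SLP_USES counts must be kept accurate.  */
  vect_free_slp_instance (instance, /*final_p=*/false);

  /* Final teardown after transform, or after a final decision not
     to vectorize: updating the counts would be redundant.  */
  vect_free_slp_instance (instance, /*final_p=*/true);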


* Re: [04/46] Factor out the test for a valid reduction input
  2018-07-24  9:54 ` [04/46] Factor out the test for a valid reduction input Richard Sandiford
@ 2018-07-25  8:46   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  8:46 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:54 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> vect_is_slp_reduction and vect_is_simple_reduction had two instances
> each of:
>
>               && (is_gimple_assign (def_stmt)
>                   || is_gimple_call (def_stmt)
>                   || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
>                            == vect_induction_def
>                   || (gimple_code (def_stmt) == GIMPLE_PHI
>                       && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
>                                   == vect_internal_def
>                       && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
>
> This patch splits it out into a subroutine.

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (vect_valid_reduction_input_p): New function,
>         split out from...
>         (vect_is_slp_reduction): ...here...
>         (vect_is_simple_reduction): ...and here.  Remove repetition of tests
>         that are already known to be false.
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:09.237496975 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:12.737465897 +0100
> @@ -2501,6 +2501,21 @@ report_vect_op (dump_flags_t msg_type, g
>    dump_gimple_stmt (msg_type, TDF_SLIM, stmt, 0);
>  }
>
> +/* DEF_STMT occurs in a loop that contains a potential reduction operation.
> +   Return true if the results of DEF_STMT are something that can be
> +   accumulated by such a reduction.  */
> +
> +static bool
> +vect_valid_reduction_input_p (gimple *def_stmt)
> +{
> +  stmt_vec_info def_stmt_info = vinfo_for_stmt (def_stmt);
> +  return (is_gimple_assign (def_stmt)
> +         || is_gimple_call (def_stmt)
> +         || STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_induction_def
> +         || (gimple_code (def_stmt) == GIMPLE_PHI
> +             && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def
> +             && !is_loop_header_bb_p (gimple_bb (def_stmt))));
> +}
>
>  /* Detect SLP reduction of the form:
>
> @@ -2624,16 +2639,9 @@ vect_is_slp_reduction (loop_vec_info loo
>              ("vect_internal_def"), or it's an induction (defined by a
>              loop-header phi-node).  */
>            if (def_stmt
> -              && gimple_bb (def_stmt)
> +             && gimple_bb (def_stmt)
>               && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
> -              && (is_gimple_assign (def_stmt)
> -                  || is_gimple_call (def_stmt)
> -                  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
> -                           == vect_induction_def
> -                  || (gimple_code (def_stmt) == GIMPLE_PHI
> -                      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
> -                                  == vect_internal_def
> -                      && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
> +             && vect_valid_reduction_input_p (def_stmt))
>             {
>               lhs = gimple_assign_lhs (next_stmt);
>               next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> @@ -2654,16 +2662,9 @@ vect_is_slp_reduction (loop_vec_info loo
>              ("vect_internal_def"), or it's an induction (defined by a
>              loop-header phi-node).  */
>            if (def_stmt
> -              && gimple_bb (def_stmt)
> +             && gimple_bb (def_stmt)
>               && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
> -              && (is_gimple_assign (def_stmt)
> -                  || is_gimple_call (def_stmt)
> -                  || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
> -                              == vect_induction_def
> -                  || (gimple_code (def_stmt) == GIMPLE_PHI
> -                      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt))
> -                                  == vect_internal_def
> -                      && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
> +             && vect_valid_reduction_input_p (def_stmt))
>             {
>               if (dump_enabled_p ())
>                 {
> @@ -3196,15 +3197,7 @@ vect_is_simple_reduction (loop_vec_info
>        && (code == COND_EXPR
>           || !def1 || gimple_nop_p (def1)
>           || !flow_bb_inside_loop_p (loop, gimple_bb (def1))
> -          || (def1 && flow_bb_inside_loop_p (loop, gimple_bb (def1))
> -              && (is_gimple_assign (def1)
> -                 || is_gimple_call (def1)
> -                 || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def1))
> -                      == vect_induction_def
> -                 || (gimple_code (def1) == GIMPLE_PHI
> -                     && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def1))
> -                          == vect_internal_def
> -                     && !is_loop_header_bb_p (gimple_bb (def1)))))))
> +         || vect_valid_reduction_input_p (def1)))
>      {
>        if (dump_enabled_p ())
>         report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
> @@ -3215,15 +3208,7 @@ vect_is_simple_reduction (loop_vec_info
>        && (code == COND_EXPR
>           || !def2 || gimple_nop_p (def2)
>           || !flow_bb_inside_loop_p (loop, gimple_bb (def2))
> -         || (def2 && flow_bb_inside_loop_p (loop, gimple_bb (def2))
> -             && (is_gimple_assign (def2)
> -                 || is_gimple_call (def2)
> -                 || STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def2))
> -                      == vect_induction_def
> -                 || (gimple_code (def2) == GIMPLE_PHI
> -                     && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def2))
> -                          == vect_internal_def
> -                     && !is_loop_header_bb_p (gimple_bb (def2)))))))
> +         || vect_valid_reduction_input_p (def2)))
>      {
>        if (! nested_in_vect_loop && orig_code != MINUS_EXPR)
>         {


* Re: [05/46] Fix make_ssa_name call in vectorizable_reduction
  2018-07-24  9:54 ` [05/46] Fix make_ssa_name call in vectorizable_reduction Richard Sandiford
@ 2018-07-25  8:47   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  8:47 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:54 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> The usual vectoriser dance to create new assignments is:
>
>     new_stmt = gimple_build_assign (vec_dest, ...);
>     new_temp = make_ssa_name (vec_dest, new_stmt);
>     gimple_assign_set_lhs (new_stmt, new_temp);
>
> but one site in vectorizable_reduction used:
>
>     new_temp = make_ssa_name (vec_dest, new_stmt);
>
> before creating new_stmt.
>
> This method of creating statements probably needs cleaning up, but
> that's for another day...

Yeah, one can elide the set_lhs by first allocating the SSA name
without a defining stmt and then building the stmt with the SSA def
directly...

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (vectorizable_reduction): Fix an instance in
>         which make_ssa_name was called with new_stmt before new_stmt
>         had been created.
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:12.737465897 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:16.421433184 +0100
> @@ -7210,9 +7210,10 @@ vectorizable_reduction (gimple *stmt, gi
>               if (op_type == ternary_op)
>                 vop[2] = vec_oprnds2[i];
>
> -             new_temp = make_ssa_name (vec_dest, new_stmt);
> -             new_stmt = gimple_build_assign (new_temp, code,
> +             new_stmt = gimple_build_assign (vec_dest, code,
>                                               vop[0], vop[1], vop[2]);
> +             new_temp = make_ssa_name (vec_dest, new_stmt);
> +             gimple_assign_set_lhs (new_stmt, new_temp);
>             }
>           vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
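
For reference, a sketch of the shorter pattern Richard Biener alludes
to above: allocate the SSA name without a defining statement first,
then build the assignment with that name as its def (illustrative
only, not part of the patch):

  tree new_temp = make_ssa_name (vec_dest);
  gimple *new_stmt = gimple_build_assign (new_temp, code,
                                          vop[0], vop[1], vop[2]);
  /* gimple_build_assign records new_stmt as the defining statement
     of new_temp, so no separate gimple_assign_set_lhs is needed.  */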


* Re: [06/46] Add vec_info::add_stmt
  2018-07-24  9:55 ` [06/46] Add vec_info::add_stmt Richard Sandiford
@ 2018-07-25  9:10   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:10 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:55 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a vec_info function for allocating and setting
> stmt_vec_infos.  It's the start of a long process of removing
> the global stmt_vec_info array.
>
>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (stmt_vec_info): Move typedef earlier in file.
>         (vec_info::add_stmt): Declare.
>         * tree-vectorizer.c (vec_info::add_stmt): New function.
>         * tree-vect-data-refs.c (vect_create_data_ref_ptr): Use it.
>         * tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Likewise.
>         (vect_create_epilog_for_reduction, vectorizable_reduction): Likewise.
>         (vectorizable_induction): Likewise.
>         * tree-vect-slp.c (_bb_vec_info::_bb_vec_info): Likewise.
>         * tree-vect-stmts.c (vect_finish_stmt_generation_1): Likewise.
>         (vectorizable_simd_clone_call, vectorizable_store): Likewise.
>         (vectorizable_load): Likewise.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
>         (vect_recog_bool_pattern, vect_recog_mask_conversion_pattern)
>         (vect_recog_gather_scatter_pattern): Likewise.
>         (append_pattern_def_seq): Likewise.  Remove a check that is
>         performed by add_stmt itself.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:09.237496975 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:19.809403100 +0100
> @@ -25,6 +25,8 @@ #define GCC_TREE_VECTORIZER_H
>  #include "tree-hash-traits.h"
>  #include "target.h"
>
> +typedef struct _stmt_vec_info *stmt_vec_info;
> +
>  /* Used for naming of new temporaries.  */
>  enum vect_var_kind {
>    vect_simple_var,
> @@ -215,6 +217,8 @@ struct vec_info {
>    vec_info (vec_kind, void *, vec_info_shared *);
>    ~vec_info ();
>
> +  stmt_vec_info add_stmt (gimple *);
> +
>    /* The type of vectorization.  */
>    vec_kind kind;
>
> @@ -761,7 +765,7 @@ struct dataref_aux {
>
>  typedef struct data_reference *dr_p;
>
> -typedef struct _stmt_vec_info {
> +struct _stmt_vec_info {
>
>    enum stmt_vec_info_type type;
>
> @@ -914,7 +918,7 @@ typedef struct _stmt_vec_info {
>       and OPERATION_BITS without changing the result.  */
>    unsigned int operation_precision;
>    signop operation_sign;
> -} *stmt_vec_info;
> +};
>
>  /* Information about a gather/scatter call.  */
>  struct gather_scatter_info {
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:22:09.237496975 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:22:19.809403100 +0100
> @@ -507,6 +507,17 @@ vec_info_shared::check_datarefs ()
>        gcc_unreachable ();
>  }
>
> +/* Record that STMT belongs to the vectorizable region.  Create and return
> +   an associated stmt_vec_info.  */
> +
> +stmt_vec_info
> +vec_info::add_stmt (gimple *stmt)
> +{
> +  stmt_vec_info res = new_stmt_vec_info (stmt, this);
> +  set_vinfo_for_stmt (stmt, res);

are these now the only callers?

OK.

> +  return res;
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-23 15:56:47.000000000 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:22:19.801403171 +0100
> @@ -4850,7 +4850,7 @@ vect_create_data_ref_ptr (gimple *stmt,
>                  aggr_ptr, loop, &incr_gsi, insert_after,
>                  &indx_before_incr, &indx_after_incr);
>        incr = gsi_stmt (incr_gsi);
> -      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
> +      loop_vinfo->add_stmt (incr);
>
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
> @@ -4880,7 +4880,7 @@ vect_create_data_ref_ptr (gimple *stmt,
>                  containing_loop, &incr_gsi, insert_after, &indx_before_incr,
>                  &indx_after_incr);
>        incr = gsi_stmt (incr_gsi);
> -      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
> +      loop_vinfo->add_stmt (incr);
>
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:16.421433184 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:19.801403171 +0100
> @@ -845,14 +845,14 @@ _loop_vec_info::_loop_vec_info (struct l
>         {
>           gimple *phi = gsi_stmt (si);
>           gimple_set_uid (phi, 0);
> -         set_vinfo_for_stmt (phi, new_stmt_vec_info (phi, this));
> +         add_stmt (phi);
>         }
>
>        for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
>         {
>           gimple *stmt = gsi_stmt (si);
>           gimple_set_uid (stmt, 0);
> -         set_vinfo_for_stmt (stmt, new_stmt_vec_info (stmt, this));
> +         add_stmt (stmt);
>         }
>      }
>    free (body);
> @@ -4665,8 +4665,7 @@ vect_create_epilog_for_reduction (vec<tr
>        /* Create a vector phi node.  */
>        tree new_phi_tree = make_ssa_name (cr_index_vector_type);
>        new_phi = create_phi_node (new_phi_tree, loop->header);
> -      set_vinfo_for_stmt (new_phi,
> -                         new_stmt_vec_info (new_phi, loop_vinfo));
> +      loop_vinfo->add_stmt (new_phi);
>        add_phi_arg (as_a <gphi *> (new_phi), vec_zero,
>                    loop_preheader_edge (loop), UNKNOWN_LOCATION);
>
> @@ -4691,10 +4690,8 @@ vect_create_epilog_for_reduction (vec<tr
>        gimple *index_condition = gimple_build_assign (induction_index,
>                                                      index_cond_expr);
>        gsi_insert_before (&incr_gsi, index_condition, GSI_SAME_STMT);
> -      stmt_vec_info index_vec_info = new_stmt_vec_info (index_condition,
> -                                                       loop_vinfo);
> +      stmt_vec_info index_vec_info = loop_vinfo->add_stmt (index_condition);
>        STMT_VINFO_VECTYPE (index_vec_info) = cr_index_vector_type;
> -      set_vinfo_for_stmt (index_condition, index_vec_info);
>
>        /* Update the phi with the vec cond.  */
>        add_phi_arg (as_a <gphi *> (new_phi), induction_index,
> @@ -4741,7 +4738,7 @@ vect_create_epilog_for_reduction (vec<tr
>          {
>           tree new_def = copy_ssa_name (def);
>            phi = create_phi_node (new_def, exit_bb);
> -          set_vinfo_for_stmt (phi, new_stmt_vec_info (phi, loop_vinfo));
> +         stmt_vec_info phi_info = loop_vinfo->add_stmt (phi);
>            if (j == 0)
>              new_phis.quick_push (phi);
>            else
> @@ -4751,7 +4748,7 @@ vect_create_epilog_for_reduction (vec<tr
>             }
>
>            SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def);
> -          prev_phi_info = vinfo_for_stmt (phi);
> +         prev_phi_info = phi_info;
>          }
>      }
>
> @@ -4768,11 +4765,9 @@ vect_create_epilog_for_reduction (vec<tr
>           gphi *outer_phi = create_phi_node (new_result, exit_bb);
>           SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>                            PHI_RESULT (phi));
> -         set_vinfo_for_stmt (outer_phi, new_stmt_vec_info (outer_phi,
> -                                                           loop_vinfo));
> +         prev_phi_info = loop_vinfo->add_stmt (outer_phi);
>           inner_phis.quick_push (phi);
>           new_phis[i] = outer_phi;
> -         prev_phi_info = vinfo_for_stmt (outer_phi);
>            while (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi)))
>              {
>               phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
> @@ -4780,10 +4775,9 @@ vect_create_epilog_for_reduction (vec<tr
>               outer_phi = create_phi_node (new_result, exit_bb);
>               SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>                                PHI_RESULT (phi));
> -             set_vinfo_for_stmt (outer_phi, new_stmt_vec_info (outer_phi,
> -                                                               loop_vinfo));
> +             stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
>               STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi;
> -             prev_phi_info = vinfo_for_stmt (outer_phi);
> +             prev_phi_info = outer_phi_info;
>             }
>         }
>      }
> @@ -5553,10 +5547,9 @@ vect_create_epilog_for_reduction (vec<tr
>        gsi_insert_before (&exit_gsi, epilog_stmt, GSI_SAME_STMT);
>        if (nested_in_vect_loop)
>          {
> -          set_vinfo_for_stmt (epilog_stmt,
> -                              new_stmt_vec_info (epilog_stmt, loop_vinfo));
> -          STMT_VINFO_RELATED_STMT (vinfo_for_stmt (epilog_stmt)) =
> -                STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
> +         stmt_vec_info epilog_stmt_info = loop_vinfo->add_stmt (epilog_stmt);
> +         STMT_VINFO_RELATED_STMT (epilog_stmt_info)
> +           = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
>
>            if (!double_reduc)
>              scalar_results.quick_push (new_temp);
> @@ -5697,7 +5690,6 @@ vect_create_epilog_for_reduction (vec<tr
>                FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, orig_name)
>                  {
>                    stmt_vec_info use_stmt_vinfo;
> -                  stmt_vec_info new_phi_vinfo;
>                    tree vect_phi_init, preheader_arg, vect_phi_res;
>                    basic_block bb = gimple_bb (use_stmt);
>                   gimple *use;
> @@ -5724,9 +5716,7 @@ vect_create_epilog_for_reduction (vec<tr
>
>                    /* Create vector phi node.  */
>                    vect_phi = create_phi_node (vec_initial_def, bb);
> -                  new_phi_vinfo = new_stmt_vec_info (vect_phi,
> -                                    loop_vec_info_for_loop (outer_loop));
> -                  set_vinfo_for_stmt (vect_phi, new_phi_vinfo);
> +                 loop_vec_info_for_loop (outer_loop)->add_stmt (vect_phi);
>
>                    /* Create vs0 - initial def of the double reduction phi.  */
>                    preheader_arg = PHI_ARG_DEF_FROM_EDGE (use_stmt,
> @@ -6249,8 +6239,7 @@ vectorizable_reduction (gimple *stmt, gi
>                   /* Create the reduction-phi that defines the reduction
>                      operand.  */
>                   gimple *new_phi = create_phi_node (vec_dest, loop->header);
> -                 set_vinfo_for_stmt (new_phi,
> -                                     new_stmt_vec_info (new_phi, loop_vinfo));
> +                 stmt_vec_info new_phi_info = loop_vinfo->add_stmt (new_phi);
>
>                   if (slp_node)
>                     SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi);
> @@ -6260,7 +6249,7 @@ vectorizable_reduction (gimple *stmt, gi
>                         STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
>                       else
>                         STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi;
> -                     prev_phi_info = vinfo_for_stmt (new_phi);
> +                     prev_phi_info = new_phi_info;
>                     }
>                 }
>             }
> @@ -7537,15 +7526,14 @@ vectorizable_induction (gimple *phi,
>           /* Create the induction-phi that defines the induction-operand.  */
>           vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
>           induction_phi = create_phi_node (vec_dest, iv_loop->header);
> -         set_vinfo_for_stmt (induction_phi,
> -                             new_stmt_vec_info (induction_phi, loop_vinfo));
> +         loop_vinfo->add_stmt (induction_phi);
>           induc_def = PHI_RESULT (induction_phi);
>
>           /* Create the iv update inside the loop  */
>           vec_def = make_ssa_name (vec_dest);
>           new_stmt = gimple_build_assign (vec_def, PLUS_EXPR, induc_def, vec_step);
>           gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
> -         set_vinfo_for_stmt (new_stmt, new_stmt_vec_info (new_stmt, loop_vinfo));
> +         loop_vinfo->add_stmt (new_stmt);
>
>           /* Set the arguments of the phi node:  */
>           add_phi_arg (induction_phi, vec_init, pe, UNKNOWN_LOCATION);
> @@ -7593,8 +7581,7 @@ vectorizable_induction (gimple *phi,
>                   gimple_stmt_iterator tgsi = gsi_for_stmt (iv);
>                   gsi_insert_after (&tgsi, new_stmt, GSI_CONTINUE_LINKING);
>                 }
> -             set_vinfo_for_stmt (new_stmt,
> -                                 new_stmt_vec_info (new_stmt, loop_vinfo));
> +             loop_vinfo->add_stmt (new_stmt);
>               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
>             }
>         }
> @@ -7623,8 +7610,7 @@ vectorizable_induction (gimple *phi,
>           new_bb = gsi_insert_on_edge_immediate (loop_preheader_edge (iv_loop),
>                                                  new_stmt);
>           gcc_assert (!new_bb);
> -         set_vinfo_for_stmt (new_stmt,
> -                             new_stmt_vec_info (new_stmt, loop_vinfo));
> +         loop_vinfo->add_stmt (new_stmt);
>         }
>      }
>    else
> @@ -7728,15 +7714,14 @@ vectorizable_induction (gimple *phi,
>    /* Create the induction-phi that defines the induction-operand.  */
>    vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
>    induction_phi = create_phi_node (vec_dest, iv_loop->header);
> -  set_vinfo_for_stmt (induction_phi,
> -                     new_stmt_vec_info (induction_phi, loop_vinfo));
> +  stmt_vec_info induction_phi_info = loop_vinfo->add_stmt (induction_phi);
>    induc_def = PHI_RESULT (induction_phi);
>
>    /* Create the iv update inside the loop  */
>    vec_def = make_ssa_name (vec_dest);
>    new_stmt = gimple_build_assign (vec_def, PLUS_EXPR, induc_def, vec_step);
>    gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
> -  set_vinfo_for_stmt (new_stmt, new_stmt_vec_info (new_stmt, loop_vinfo));
> +  stmt_vec_info new_stmt_info = loop_vinfo->add_stmt (new_stmt);
>
>    /* Set the arguments of the phi node:  */
>    add_phi_arg (induction_phi, vec_init, pe, UNKNOWN_LOCATION);
> @@ -7781,7 +7766,7 @@ vectorizable_induction (gimple *phi,
>        vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
>
>        vec_def = induc_def;
> -      prev_stmt_vinfo = vinfo_for_stmt (induction_phi);
> +      prev_stmt_vinfo = induction_phi_info;
>        for (i = 1; i < ncopies; i++)
>         {
>           /* vec_i = vec_prev + vec_step  */
> @@ -7791,10 +7776,9 @@ vectorizable_induction (gimple *phi,
>           gimple_assign_set_lhs (new_stmt, vec_def);
>
>           gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
> -         set_vinfo_for_stmt (new_stmt,
> -                             new_stmt_vec_info (new_stmt, loop_vinfo));
> +         new_stmt_info = loop_vinfo->add_stmt (new_stmt);
>           STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt;
> -         prev_stmt_vinfo = vinfo_for_stmt (new_stmt);
> +         prev_stmt_vinfo = new_stmt_info;
>         }
>      }
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:09.237496975 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:19.805403136 +0100
> @@ -2494,7 +2494,7 @@ _bb_vec_info::_bb_vec_info (gimple_stmt_
>      {
>        gimple *stmt = gsi_stmt (gsi);
>        gimple_set_uid (stmt, 0);
> -      set_vinfo_for_stmt (stmt, new_stmt_vec_info (stmt, this));
> +      add_stmt (stmt);
>      }
>
>    bb->aux = this;
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-13 10:11:14.533842692 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:19.809403100 +0100
> @@ -1744,7 +1744,7 @@ vect_finish_stmt_generation_1 (gimple *s
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>
> -  set_vinfo_for_stmt (vec_stmt, new_stmt_vec_info (vec_stmt, vinfo));
> +  vinfo->add_stmt (vec_stmt);
>
>    if (dump_enabled_p ())
>      {
> @@ -4183,8 +4183,7 @@ vectorizable_simd_clone_call (gimple *st
>                     }
>                   tree phi_res = copy_ssa_name (op);
>                   gphi *new_phi = create_phi_node (phi_res, loop->header);
> -                 set_vinfo_for_stmt (new_phi,
> -                                     new_stmt_vec_info (new_phi, loop_vinfo));
> +                 loop_vinfo->add_stmt (new_phi);
>                   add_phi_arg (new_phi, arginfo[i].op,
>                                loop_preheader_edge (loop), UNKNOWN_LOCATION);
>                   enum tree_code code
> @@ -4201,8 +4200,7 @@ vectorizable_simd_clone_call (gimple *st
>                     = gimple_build_assign (phi_arg, code, phi_res, tcst);
>                   gimple_stmt_iterator si = gsi_after_labels (loop->header);
>                   gsi_insert_after (&si, new_stmt, GSI_NEW_STMT);
> -                 set_vinfo_for_stmt (new_stmt,
> -                                     new_stmt_vec_info (new_stmt, loop_vinfo));
> +                 loop_vinfo->add_stmt (new_stmt);
>                   add_phi_arg (new_phi, phi_arg, loop_latch_edge (loop),
>                                UNKNOWN_LOCATION);
>                   arginfo[i].op = phi_res;
> @@ -6731,7 +6729,7 @@ vectorizable_store (gimple *stmt, gimple
>                  loop, &incr_gsi, insert_after,
>                  &offvar, NULL);
>        incr = gsi_stmt (incr_gsi);
> -      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
> +      loop_vinfo->add_stmt (incr);
>
>        stride_step = cse_and_gimplify_to_preheader (loop_vinfo, stride_step);
>
> @@ -7729,7 +7727,7 @@ vectorizable_load (gimple *stmt, gimple_
>                  loop, &incr_gsi, insert_after,
>                  &offvar, NULL);
>        incr = gsi_stmt (incr_gsi);
> -      set_vinfo_for_stmt (incr, new_stmt_vec_info (incr, loop_vinfo));
> +      loop_vinfo->add_stmt (incr);
>
>        stride_step = cse_and_gimplify_to_preheader (loop_vinfo, stride_step);
>
> @@ -8488,8 +8486,7 @@ vectorizable_load (gimple *stmt, gimple_
>                                                 (gimple_assign_rhs1 (stmt))));
>                       new_temp = vect_init_vector (stmt, tem, vectype, NULL);
>                       new_stmt = SSA_NAME_DEF_STMT (new_temp);
> -                     set_vinfo_for_stmt (new_stmt,
> -                                         new_stmt_vec_info (new_stmt, vinfo));
> +                     vinfo->add_stmt (new_stmt);
>                     }
>                   else
>                     {
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-18 18:44:23.517905682 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:19.805403136 +0100
> @@ -103,11 +103,7 @@ vect_init_pattern_stmt (gimple *pattern_
>  {
>    stmt_vec_info pattern_stmt_info = vinfo_for_stmt (pattern_stmt);
>    if (pattern_stmt_info == NULL)
> -    {
> -      pattern_stmt_info = new_stmt_vec_info (pattern_stmt,
> -                                            orig_stmt_info->vinfo);
> -      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
> -    }
> +    pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
>    STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info->stmt;
> @@ -141,9 +137,7 @@ append_pattern_def_seq (stmt_vec_info st
>    vec_info *vinfo = stmt_info->vinfo;
>    if (vectype)
>      {
> -      gcc_assert (!vinfo_for_stmt (new_stmt));
> -      stmt_vec_info new_stmt_info = new_stmt_vec_info (new_stmt, vinfo);
> -      set_vinfo_for_stmt (new_stmt, new_stmt_info);
> +      stmt_vec_info new_stmt_info = vinfo->add_stmt (new_stmt);
>        STMT_VINFO_VECTYPE (new_stmt_info) = vectype;
>      }
>    gimple_seq_add_stmt_without_update (&STMT_VINFO_PATTERN_DEF_SEQ (stmt_info),
> @@ -3832,8 +3826,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>           rhs = rhs2;
>         }
>        pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
> -      pattern_stmt_info = new_stmt_vec_info (pattern_stmt, vinfo);
> -      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
> +      pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
>        STMT_VINFO_DATA_REF (pattern_stmt_info)
>         = STMT_VINFO_DATA_REF (stmt_vinfo);
>        STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
> @@ -3958,8 +3951,7 @@ vect_recog_mask_conversion_pattern (stmt
>         }
>        gimple_call_set_nothrow (pattern_stmt, true);
>
> -      pattern_stmt_info = new_stmt_vec_info (pattern_stmt, vinfo);
> -      set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
> +      pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
>        if (STMT_VINFO_DATA_REF (stmt_vinfo))
>         {
>           STMT_VINFO_DATA_REF (pattern_stmt_info)
> @@ -4290,9 +4282,7 @@ vect_recog_gather_scatter_pattern (stmt_
>
>    /* Copy across relevant vectorization info and associate DR with the
>       new pattern statement instead of the original statement.  */
> -  stmt_vec_info pattern_stmt_info = new_stmt_vec_info (pattern_stmt,
> -                                                      loop_vinfo);
> -  set_vinfo_for_stmt (pattern_stmt, pattern_stmt_info);
> +  stmt_vec_info pattern_stmt_info = loop_vinfo->add_stmt (pattern_stmt);
>    STMT_VINFO_DATA_REF (pattern_stmt_info) = dr;
>    STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
>      = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);


* Re: [07/46] Add vec_info::lookup_stmt
  2018-07-24  9:55 ` [07/46] Add vec_info::lookup_stmt Richard Sandiford
@ 2018-07-25  9:11   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:11 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:55 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a vec_info replacement for vinfo_for_stmt.  The main
> difference is that the new routine can cope with arbitrary statements,
> so there's no need to call vect_stmt_in_region_p first.
>
> The patch only converts calls that are still needed at the end of the
> series.  Later patches get rid of most other calls to vinfo_for_stmt.
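
Concretely, a guarded lookup against the global array such as

  stmt_vec_info info = NULL;
  if (vect_stmt_in_region_p (vinfo, stmt))
    info = vinfo_for_stmt (stmt);

can become the single call

  stmt_vec_info info = vinfo->lookup_stmt (stmt);

which simply returns null for statements outside the vectorizable
region (the caller above is a hypothetical illustration).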

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::lookup_stmt): Declare.
>         * tree-vectorizer.c (vec_info::lookup_stmt): New function.
>         * tree-vect-loop.c (vect_determine_vf_for_stmt): Use it instead
>         of vinfo_for_stmt.
>         (vect_determine_vectorization_factor, vect_analyze_scalar_cycles_1)
>         (vect_compute_single_scalar_iteration_cost, vect_analyze_loop_form)
>         (vect_update_vf_for_slp, vect_analyze_loop_operations)
>         (vect_is_slp_reduction, vectorizable_induction)
>         (vect_transform_loop_stmt, vect_transform_loop): Likewise.
>         * tree-vect-patterns.c (vect_init_pattern_stmt):
>         (vect_determine_min_output_precision_1, vect_determine_precisions)
>         (vect_pattern_recog): Likewise.
>         * tree-vect-stmts.c (vect_analyze_stmt, vect_transform_stmt): Likewise.
>         * config/powerpcspe/powerpcspe.c (rs6000_density_test): Likewise.
>         * config/rs6000/rs6000.c (rs6000_density_test): Likewise.
>         * tree-vect-slp.c (vect_detect_hybrid_slp_stmts): Likewise.
>         (vect_detect_hybrid_slp_1, vect_detect_hybrid_slp_2)
>         (vect_detect_hybrid_slp): Likewise.  Change the walk_stmt_info
>         info field from a loop to a loop_vec_info.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:19.809403100 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:23.797367688 +0100
> @@ -218,6 +218,7 @@ struct vec_info {
>    ~vec_info ();
>
>    stmt_vec_info add_stmt (gimple *);
> +  stmt_vec_info lookup_stmt (gimple *);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:22:19.809403100 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:22:23.797367688 +0100
> @@ -518,6 +518,23 @@ vec_info::add_stmt (gimple *stmt)
>    return res;
>  }
>
> +/* If STMT has an associated stmt_vec_info, return that vec_info, otherwise
> +   return null.  It is safe to call this function on any statement, even if
> +   it might not be part of the vectorizable region.  */
> +
> +stmt_vec_info
> +vec_info::lookup_stmt (gimple *stmt)
> +{
> +  unsigned int uid = gimple_uid (stmt);
> +  if (uid > 0 && uid - 1 < stmt_vec_infos.length ())
> +    {
> +      stmt_vec_info res = stmt_vec_infos[uid - 1];
> +      if (res && res->stmt == stmt)
> +       return res;
> +    }
> +  return NULL;
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:19.801403171 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:23.793367723 +0100
> @@ -213,6 +213,7 @@ vect_determine_vf_for_stmt_1 (stmt_vec_i
>  vect_determine_vf_for_stmt (stmt_vec_info stmt_info, poly_uint64 *vf,
>                             vec<stmt_vec_info > *mask_producers)
>  {
> +  vec_info *vinfo = stmt_info->vinfo;
>    if (dump_enabled_p ())
>      {
>        dump_printf_loc (MSG_NOTE, vect_location, "==> examining statement: ");
> @@ -231,7 +232,7 @@ vect_determine_vf_for_stmt (stmt_vec_inf
>        for (gimple_stmt_iterator si = gsi_start (pattern_def_seq);
>            !gsi_end_p (si); gsi_next (&si))
>         {
> -         stmt_vec_info def_stmt_info = vinfo_for_stmt (gsi_stmt (si));
> +         stmt_vec_info def_stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_NOTE, vect_location,
> @@ -306,7 +307,7 @@ vect_determine_vectorization_factor (loo
>            gsi_next (&si))
>         {
>           phi = si.phi ();
> -         stmt_info = vinfo_for_stmt (phi);
> +         stmt_info = loop_vinfo->lookup_stmt (phi);
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_NOTE, vect_location, "==> examining phi: ");
> @@ -366,7 +367,7 @@ vect_determine_vectorization_factor (loo
>        for (gimple_stmt_iterator si = gsi_start_bb (bb); !gsi_end_p (si);
>            gsi_next (&si))
>         {
> -         stmt_info = vinfo_for_stmt (gsi_stmt (si));
> +         stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           if (!vect_determine_vf_for_stmt (stmt_info, &vectorization_factor,
>                                            &mask_producers))
>             return false;
> @@ -487,7 +488,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>        gphi *phi = gsi.phi ();
>        tree access_fn = NULL;
>        tree def = PHI_RESULT (phi);
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
> +      stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (phi);
>
>        if (dump_enabled_p ())
>         {
> @@ -1101,7 +1102,7 @@ vect_compute_single_scalar_iteration_cos
>        for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
>          {
>           gimple *stmt = gsi_stmt (si);
> -          stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>
>            if (!is_gimple_assign (stmt) && !is_gimple_call (stmt))
>              continue;
> @@ -1390,10 +1391,14 @@ vect_analyze_loop_form (struct loop *loo
>          }
>      }
>
> -  STMT_VINFO_TYPE (vinfo_for_stmt (loop_cond)) = loop_exit_ctrl_vec_info_type;
> +  stmt_vec_info loop_cond_info = loop_vinfo->lookup_stmt (loop_cond);
> +  STMT_VINFO_TYPE (loop_cond_info) = loop_exit_ctrl_vec_info_type;
>    if (inner_loop_cond)
> -    STMT_VINFO_TYPE (vinfo_for_stmt (inner_loop_cond))
> -      = loop_exit_ctrl_vec_info_type;
> +    {
> +      stmt_vec_info inner_loop_cond_info
> +       = loop_vinfo->lookup_stmt (inner_loop_cond);
> +      STMT_VINFO_TYPE (inner_loop_cond_info) = loop_exit_ctrl_vec_info_type;
> +    }
>
>    gcc_assert (!loop->aux);
>    loop->aux = loop_vinfo;
> @@ -1432,7 +1437,7 @@ vect_update_vf_for_slp (loop_vec_info lo
>            gsi_next (&si))
>         {
>           gimple *stmt = gsi_stmt (si);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>           if (STMT_VINFO_IN_PATTERN_P (stmt_info)
>               && STMT_VINFO_RELATED_STMT (stmt_info))
>             {
> @@ -1532,7 +1537,7 @@ vect_analyze_loop_operations (loop_vec_i
>            gphi *phi = si.phi ();
>            ok = true;
>
> -          stmt_info = vinfo_for_stmt (phi);
> +         stmt_info = loop_vinfo->lookup_stmt (phi);
>            if (dump_enabled_p ())
>              {
>                dump_printf_loc (MSG_NOTE, vect_location, "examining phi: ");
> @@ -2238,13 +2243,13 @@ vect_analyze_loop_2 (loop_vec_info loop_
>        for (gimple_stmt_iterator si = gsi_start_phis (bb);
>            !gsi_end_p (si); gsi_next (&si))
>         {
> -         stmt_vec_info stmt_info = vinfo_for_stmt (gsi_stmt (si));
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           STMT_SLP_TYPE (stmt_info) = loop_vect;
>         }
>        for (gimple_stmt_iterator si = gsi_start_bb (bb);
>            !gsi_end_p (si); gsi_next (&si))
>         {
> -         stmt_vec_info stmt_info = vinfo_for_stmt (gsi_stmt (si));
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           STMT_SLP_TYPE (stmt_info) = loop_vect;
>           if (STMT_VINFO_IN_PATTERN_P (stmt_info))
>             {
> @@ -2253,10 +2258,8 @@ vect_analyze_loop_2 (loop_vec_info loop_
>               STMT_SLP_TYPE (stmt_info) = loop_vect;
>               for (gimple_stmt_iterator pi = gsi_start (pattern_def_seq);
>                    !gsi_end_p (pi); gsi_next (&pi))
> -               {
> -                 gimple *pstmt = gsi_stmt (pi);
> -                 STMT_SLP_TYPE (vinfo_for_stmt (pstmt)) = loop_vect;
> -               }
> +               STMT_SLP_TYPE (loop_vinfo->lookup_stmt (gsi_stmt (pi)))
> +                 = loop_vect;
>             }
>         }
>      }
> @@ -2602,7 +2605,7 @@ vect_is_slp_reduction (loop_vec_info loo
>          return false;
>
>        /* Insert USE_STMT into reduction chain.  */
> -      use_stmt_info = vinfo_for_stmt (loop_use_stmt);
> +      use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
>        if (current_stmt)
>          {
>            current_stmt_info = vinfo_for_stmt (current_stmt);
> @@ -5549,7 +5552,7 @@ vect_create_epilog_for_reduction (vec<tr
>          {
>           stmt_vec_info epilog_stmt_info = loop_vinfo->add_stmt (epilog_stmt);
>           STMT_VINFO_RELATED_STMT (epilog_stmt_info)
> -           = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (new_phi));
> +           = STMT_VINFO_RELATED_STMT (loop_vinfo->lookup_stmt (new_phi));
>
>            if (!double_reduc)
>              scalar_results.quick_push (new_temp);
> @@ -5653,7 +5656,8 @@ vect_create_epilog_for_reduction (vec<tr
>          {
>            if (outer_loop)
>              {
> -              stmt_vec_info exit_phi_vinfo = vinfo_for_stmt (exit_phi);
> +             stmt_vec_info exit_phi_vinfo
> +               = loop_vinfo->lookup_stmt (exit_phi);
>                gphi *vect_phi;
>
>                /* FORNOW. Currently not supporting the case that an inner-loop
> @@ -5700,7 +5704,7 @@ vect_create_epilog_for_reduction (vec<tr
>                        || gimple_phi_num_args (use_stmt) != 2
>                        || bb->loop_father != outer_loop)
>                      continue;
> -                  use_stmt_vinfo = vinfo_for_stmt (use_stmt);
> +                 use_stmt_vinfo = loop_vinfo->lookup_stmt (use_stmt);
>                    if (!use_stmt_vinfo
>                        || STMT_VINFO_DEF_TYPE (use_stmt_vinfo)
>                            != vect_double_reduction_def)
> @@ -7377,7 +7381,7 @@ vectorizable_induction (gimple *phi,
>         }
>        if (exit_phi)
>         {
> -         stmt_vec_info exit_phi_vinfo  = vinfo_for_stmt (exit_phi);
> +         stmt_vec_info exit_phi_vinfo = loop_vinfo->lookup_stmt (exit_phi);
>           if (!(STMT_VINFO_RELEVANT_P (exit_phi_vinfo)
>                 && !STMT_VINFO_LIVE_P (exit_phi_vinfo)))
>             {
> @@ -7801,7 +7805,7 @@ vectorizable_induction (gimple *phi,
>          }
>        if (exit_phi)
>         {
> -         stmt_vec_info stmt_vinfo = vinfo_for_stmt (exit_phi);
> +         stmt_vec_info stmt_vinfo = loop_vinfo->lookup_stmt (exit_phi);
>           /* FORNOW. Currently not supporting the case that an inner-loop induction
>              is not used in the outer-loop (i.e. only outside the outer-loop).  */
>           gcc_assert (STMT_VINFO_RELEVANT_P (stmt_vinfo)
> @@ -8260,7 +8264,7 @@ vect_transform_loop_stmt (loop_vec_info
>  {
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>    if (!stmt_info)
>      return;
>
> @@ -8463,7 +8467,7 @@ vect_transform_loop (loop_vec_info loop_
>                                 "------>vectorizing phi: ");
>               dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
>             }
> -         stmt_info = vinfo_for_stmt (phi);
> +         stmt_info = loop_vinfo->lookup_stmt (phi);
>           if (!stmt_info)
>             continue;
>
> @@ -8504,7 +8508,7 @@ vect_transform_loop (loop_vec_info loop_
>             }
>           else
>             {
> -             stmt_info = vinfo_for_stmt (stmt);
> +             stmt_info = loop_vinfo->lookup_stmt (stmt);
>
>               /* vector stmts created in the outer-loop during vectorization of
>                  stmts in an inner-loop may not have a stmt_info, and do not
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:19.805403136 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:23.793367723 +0100
> @@ -101,7 +101,8 @@ vect_pattern_detected (const char *name,
>  vect_init_pattern_stmt (gimple *pattern_stmt, stmt_vec_info orig_stmt_info,
>                         tree vectype)
>  {
> -  stmt_vec_info pattern_stmt_info = vinfo_for_stmt (pattern_stmt);
> +  vec_info *vinfo = orig_stmt_info->vinfo;
> +  stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
>    if (pattern_stmt_info == NULL)
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
> @@ -4401,6 +4402,7 @@ vect_set_min_input_precision (stmt_vec_i
>  vect_determine_min_output_precision_1 (stmt_vec_info stmt_info, tree lhs)
>  {
>    /* Take the maximum precision required by users of the result.  */
> +  vec_info *vinfo = stmt_info->vinfo;
>    unsigned int precision = 0;
>    imm_use_iterator iter;
>    use_operand_p use;
> @@ -4409,10 +4411,8 @@ vect_determine_min_output_precision_1 (s
>        gimple *use_stmt = USE_STMT (use);
>        if (is_gimple_debug (use_stmt))
>         continue;
> -      if (!vect_stmt_in_region_p (stmt_info->vinfo, use_stmt))
> -       return false;
> -      stmt_vec_info use_stmt_info = vinfo_for_stmt (use_stmt);
> -      if (!use_stmt_info->min_input_precision)
> +      stmt_vec_info use_stmt_info = vinfo->lookup_stmt (use_stmt);
> +      if (!use_stmt_info || !use_stmt_info->min_input_precision)
>         return false;
>        precision = MAX (precision, use_stmt_info->min_input_precision);
>      }
> @@ -4657,7 +4657,8 @@ vect_determine_precisions (vec_info *vin
>           basic_block bb = bbs[nbbs - i - 1];
>           for (gimple_stmt_iterator si = gsi_last_bb (bb);
>                !gsi_end_p (si); gsi_prev (&si))
> -           vect_determine_stmt_precisions (vinfo_for_stmt (gsi_stmt (si)));
> +           vect_determine_stmt_precisions
> +             (vinfo->lookup_stmt (gsi_stmt (si)));
>         }
>      }
>    else
> @@ -4672,7 +4673,7 @@ vect_determine_precisions (vec_info *vin
>           else
>             gsi_prev (&si);
>           stmt = gsi_stmt (si);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
>           if (stmt_info && STMT_VINFO_VECTORIZABLE (stmt_info))
>             vect_determine_stmt_precisions (stmt_info);
>         }
> @@ -4971,7 +4972,7 @@ vect_pattern_recog (vec_info *vinfo)
>            gsi_stmt (si) != gsi_stmt (bb_vinfo->region_end); gsi_next (&si))
>         {
>           gimple *stmt = gsi_stmt (si);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = bb_vinfo->lookup_stmt (stmt);
>           if (stmt_info && !STMT_VINFO_VECTORIZABLE (stmt_info))
>             continue;
>
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:19.809403100 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:23.797367688 +0100
> @@ -9377,6 +9377,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
>                    slp_instance node_instance, stmt_vector_for_cost *cost_vec)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  vec_info *vinfo = stmt_info->vinfo;
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    enum vect_relevant relevance = STMT_VINFO_RELEVANT (stmt_info);
>    bool ok;
> @@ -9407,8 +9408,10 @@ vect_analyze_stmt (gimple *stmt, bool *n
>        for (si = gsi_start (pattern_def_seq); !gsi_end_p (si); gsi_next (&si))
>         {
>           gimple *pattern_def_stmt = gsi_stmt (si);
> -         if (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_def_stmt))
> -             || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_def_stmt)))
> +         stmt_vec_info pattern_def_stmt_info
> +           = vinfo->lookup_stmt (pattern_def_stmt);
> +         if (STMT_VINFO_RELEVANT_P (pattern_def_stmt_info)
> +             || STMT_VINFO_LIVE_P (pattern_def_stmt_info))
>             {
>               /* Analyze def stmt of STMT if it's a pattern stmt.  */
>               if (dump_enabled_p ())
> @@ -9605,9 +9608,10 @@ vect_transform_stmt (gimple *stmt, gimpl
>                      bool *grouped_store, slp_tree slp_node,
>                       slp_instance slp_node_instance)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  vec_info *vinfo = stmt_info->vinfo;
>    bool is_store = false;
>    gimple *vec_stmt = NULL;
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    bool done;
>
>    gcc_assert (slp_node || !PURE_SLP_STMT (stmt_info));
> @@ -9728,7 +9732,6 @@ vect_transform_stmt (gimple *stmt, gimpl
>        imm_use_iterator imm_iter;
>        use_operand_p use_p;
>        tree scalar_dest;
> -      gimple *exit_phi;
>
>        if (dump_enabled_p ())
>          dump_printf_loc (MSG_NOTE, vect_location,
> @@ -9743,13 +9746,12 @@ vect_transform_stmt (gimple *stmt, gimpl
>          scalar_dest = gimple_assign_lhs (stmt);
>
>        FOR_EACH_IMM_USE_FAST (use_p, imm_iter, scalar_dest)
> -       {
> -         if (!flow_bb_inside_loop_p (innerloop, gimple_bb (USE_STMT (use_p))))
> -           {
> -             exit_phi = USE_STMT (use_p);
> -             STMT_VINFO_VEC_STMT (vinfo_for_stmt (exit_phi)) = vec_stmt;
> -           }
> -       }
> +       if (!flow_bb_inside_loop_p (innerloop, gimple_bb (USE_STMT (use_p))))
> +         {
> +           stmt_vec_info exit_phi_info
> +             = vinfo->lookup_stmt (USE_STMT (use_p));
> +           STMT_VINFO_VEC_STMT (exit_phi_info) = vec_stmt;
> +         }
>      }
>
>    /* Handle stmts whose DEF is used outside the loop-nest that is
> Index: gcc/config/powerpcspe/powerpcspe.c
> ===================================================================
> --- gcc/config/powerpcspe/powerpcspe.c  2018-07-18 18:44:23.681904201 +0100
> +++ gcc/config/powerpcspe/powerpcspe.c  2018-07-24 10:22:23.785367794 +0100
> @@ -6030,6 +6030,7 @@ rs6000_density_test (rs6000_cost_data *d
>    struct loop *loop = data->loop_info;
>    basic_block *bbs = get_loop_body (loop);
>    int nbbs = loop->num_nodes;
> +  loop_vec_info loop_vinfo = loop_vec_info_for_loop (data->loop_info);
>    int vec_cost = data->cost[vect_body], not_vec_cost = 0;
>    int i, density_pct;
>
> @@ -6041,7 +6042,7 @@ rs6000_density_test (rs6000_cost_data *d
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>         {
>           gimple *stmt = gsi_stmt (gsi);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>
>           if (!STMT_VINFO_RELEVANT_P (stmt_info)
>               && !STMT_VINFO_IN_PATTERN_P (stmt_info))
> Index: gcc/config/rs6000/rs6000.c
> ===================================================================
> --- gcc/config/rs6000/rs6000.c  2018-07-23 17:14:27.395541019 +0100
> +++ gcc/config/rs6000/rs6000.c  2018-07-24 10:22:23.793367723 +0100
> @@ -5566,6 +5566,7 @@ rs6000_density_test (rs6000_cost_data *d
>    struct loop *loop = data->loop_info;
>    basic_block *bbs = get_loop_body (loop);
>    int nbbs = loop->num_nodes;
> +  loop_vec_info loop_vinfo = loop_vec_info_for_loop (data->loop_info);
>    int vec_cost = data->cost[vect_body], not_vec_cost = 0;
>    int i, density_pct;
>
> @@ -5577,7 +5578,7 @@ rs6000_density_test (rs6000_cost_data *d
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>         {
>           gimple *stmt = gsi_stmt (gsi);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>
>           if (!STMT_VINFO_RELEVANT_P (stmt_info)
>               && !STMT_VINFO_IN_PATTERN_P (stmt_info))
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:19.805403136 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:23.793367723 +0100
> @@ -2315,7 +2315,6 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>    stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
>    slp_tree child;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
> -  struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    int j;
>
>    /* Propagate hybrid down the SLP tree.  */
> @@ -2340,9 +2339,9 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>        if (def)
>         FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, def)
>           {
> -           if (!flow_bb_inside_loop_p (loop, gimple_bb (use_stmt)))
> +           use_vinfo = loop_vinfo->lookup_stmt (use_stmt);
> +           if (!use_vinfo)
>               continue;
> -           use_vinfo = vinfo_for_stmt (use_stmt);
>             if (STMT_VINFO_IN_PATTERN_P (use_vinfo)
>                 && STMT_VINFO_RELATED_STMT (use_vinfo))
>               use_vinfo = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (use_vinfo));
> @@ -2385,25 +2384,23 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>  vect_detect_hybrid_slp_1 (tree *tp, int *, void *data)
>  {
>    walk_stmt_info *wi = (walk_stmt_info *)data;
> -  struct loop *loopp = (struct loop *)wi->info;
> +  loop_vec_info loop_vinfo = (loop_vec_info) wi->info;
>
>    if (wi->is_lhs)
>      return NULL_TREE;
>
> +  stmt_vec_info def_stmt_info;
>    if (TREE_CODE (*tp) == SSA_NAME
> -      && !SSA_NAME_IS_DEFAULT_DEF (*tp))
> +      && !SSA_NAME_IS_DEFAULT_DEF (*tp)
> +      && (def_stmt_info = loop_vinfo->lookup_stmt (SSA_NAME_DEF_STMT (*tp)))
> +      && PURE_SLP_STMT (def_stmt_info))
>      {
> -      gimple *def_stmt = SSA_NAME_DEF_STMT (*tp);
> -      if (flow_bb_inside_loop_p (loopp, gimple_bb (def_stmt))
> -         && PURE_SLP_STMT (vinfo_for_stmt (def_stmt)))
> +      if (dump_enabled_p ())
>         {
> -         if (dump_enabled_p ())
> -           {
> -             dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
> -             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, def_stmt, 0);
> -           }
> -         STMT_SLP_TYPE (vinfo_for_stmt (def_stmt)) = hybrid;
> +         dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, def_stmt_info->stmt, 0);
>         }
> +      STMT_SLP_TYPE (def_stmt_info) = hybrid;
>      }
>
>    return NULL_TREE;
> @@ -2411,9 +2408,10 @@ vect_detect_hybrid_slp_1 (tree *tp, int
>
>  static tree
>  vect_detect_hybrid_slp_2 (gimple_stmt_iterator *gsi, bool *handled,
> -                         walk_stmt_info *)
> +                         walk_stmt_info *wi)
>  {
> -  stmt_vec_info use_vinfo = vinfo_for_stmt (gsi_stmt (*gsi));
> +  loop_vec_info loop_vinfo = (loop_vec_info) wi->info;
> +  stmt_vec_info use_vinfo = loop_vinfo->lookup_stmt (gsi_stmt (*gsi));
>    /* If the stmt is in a SLP instance then this isn't a reason
>       to mark use definitions in other SLP instances as hybrid.  */
>    if (! STMT_SLP_TYPE (use_vinfo)
> @@ -2447,12 +2445,12 @@ vect_detect_hybrid_slp (loop_vec_info lo
>            gsi_next (&gsi))
>         {
>           gimple *stmt = gsi_stmt (gsi);
> -         stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
>           if (STMT_VINFO_IN_PATTERN_P (stmt_info))
>             {
>               walk_stmt_info wi;
>               memset (&wi, 0, sizeof (wi));
> -             wi.info = LOOP_VINFO_LOOP (loop_vinfo);
> +             wi.info = loop_vinfo;
>               gimple_stmt_iterator gsi2
>                 = gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
>               walk_gimple_stmt (&gsi2, vect_detect_hybrid_slp_2,

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [08/46] Add vec_info::lookup_def
  2018-07-24  9:55 ` [08/46] Add vec_info::lookup_def Richard Sandiford
@ 2018-07-25  9:12   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:12 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:55 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a vec_info helper for checking whether an operand is an
> SSA_NAME that is defined in the vectorisable region.
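> 
> As an illustration (hand-written here, not an excerpt from the patch),
> callers that previously combined vect_is_simple_use with vinfo_for_stmt
> just to test for an internal def can now write:
> 
>   stmt_vec_info def_stmt_info = vinfo->lookup_def (op);
>   if (def_stmt_info
>       && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def)
>     /* OP is defined by a vectorizable statement in the region.  */
>     ...;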

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::lookup_def): Declare.
>         * tree-vectorizer.c (vec_info::lookup_def): New function.
>         * tree-vect-patterns.c (vect_get_internal_def): Use it.
>         (vect_widened_op_tree): Likewise.
>         * tree-vect-stmts.c (vect_is_simple_use): Likewise.
>         * tree-vect-loop.c (vect_analyze_loop_operations): Likewise.
>         (vectorizable_reduction): Likewise.
>         (vect_valid_reduction_input_p): Take a stmt_vec_info instead
>         of a gimple *.
>         (vect_is_slp_reduction): Update calls accordingly.  Use
>         vec_info::lookup_def.
>         (vect_is_simple_reduction): Likewise.
>         * tree-vect-slp.c (vect_detect_hybrid_slp_1): Use vec_info::lookup_def.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:23.797367688 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:27.285336715 +0100
> @@ -219,6 +219,7 @@ struct vec_info {
>
>    stmt_vec_info add_stmt (gimple *);
>    stmt_vec_info lookup_stmt (gimple *);
> +  stmt_vec_info lookup_def (tree);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:22:23.797367688 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:22:27.285336715 +0100
> @@ -535,6 +535,19 @@ vec_info::lookup_stmt (gimple *stmt)
>    return NULL;
>  }
>
> +/* If NAME is an SSA_NAME and its definition has an associated stmt_vec_info,
> +   return that stmt_vec_info, otherwise return null.  It is safe to call
> +   this on arbitrary operands.  */
> +
> +stmt_vec_info
> +vec_info::lookup_def (tree name)
> +{
> +  if (TREE_CODE (name) == SSA_NAME
> +      && !SSA_NAME_IS_DEFAULT_DEF (name))
> +    return lookup_stmt (SSA_NAME_DEF_STMT (name));
> +  return NULL;
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:23.793367723 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:27.281336751 +0100
> @@ -227,14 +227,11 @@ vect_element_precision (unsigned int pre
>  static stmt_vec_info
>  vect_get_internal_def (vec_info *vinfo, tree op)
>  {
> -  vect_def_type dt;
> -  gimple *def_stmt;
> -  if (TREE_CODE (op) != SSA_NAME
> -      || !vect_is_simple_use (op, vinfo, &dt, &def_stmt)
> -      || dt != vect_internal_def)
> -    return NULL;
> -
> -  return vinfo_for_stmt (def_stmt);
> +  stmt_vec_info def_stmt_info = vinfo->lookup_def (op);
> +  if (def_stmt_info
> +      && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def)
> +    return def_stmt_info;
> +  return NULL;
>  }
>
>  /* Check whether NAME, an ssa-name used in USE_STMT,
> @@ -528,6 +525,7 @@ vect_widened_op_tree (stmt_vec_info stmt
>                       vect_unpromoted_value *unprom, tree *common_type)
>  {
>    /* Check for an integer operation with the right code.  */
> +  vec_info *vinfo = stmt_info->vinfo;
>    gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
>    if (!assign)
>      return 0;
> @@ -584,7 +582,7 @@ vect_widened_op_tree (stmt_vec_info stmt
>
>               /* Recursively process the definition of the operand.  */
>               stmt_vec_info def_stmt_info
> -               = vinfo_for_stmt (SSA_NAME_DEF_STMT (this_unprom->op));
> +               = vinfo->lookup_def (this_unprom->op);
>               nops = vect_widened_op_tree (def_stmt_info, code, widened_code,
>                                            shift_p, max_nops, this_unprom,
>                                            common_type);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:23.797367688 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:27.281336751 +0100
> @@ -10092,11 +10092,11 @@ vect_is_simple_use (tree operand, vec_in
>    else
>      {
>        gimple *def_stmt = SSA_NAME_DEF_STMT (operand);
> -      if (! vect_stmt_in_region_p (vinfo, def_stmt))
> +      stmt_vec_info stmt_vinfo = vinfo->lookup_def (operand);
> +      if (!stmt_vinfo)
>         *dt = vect_external_def;
>        else
>         {
> -         stmt_vec_info stmt_vinfo = vinfo_for_stmt (def_stmt);
>           if (STMT_VINFO_IN_PATTERN_P (stmt_vinfo))
>             {
>               def_stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:23.793367723 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:27.277336786 +0100
> @@ -1569,26 +1569,19 @@ vect_analyze_loop_operations (loop_vec_i
>                if (STMT_VINFO_RELEVANT_P (stmt_info))
>                  {
>                    tree phi_op;
> -                 gimple *op_def_stmt;
>
>                    if (gimple_phi_num_args (phi) != 1)
>                      return false;
>
>                    phi_op = PHI_ARG_DEF (phi, 0);
> -                  if (TREE_CODE (phi_op) != SSA_NAME)
> +                 stmt_vec_info op_def_info = loop_vinfo->lookup_def (phi_op);
> +                 if (!op_def_info)
>                      return false;
>
> -                  op_def_stmt = SSA_NAME_DEF_STMT (phi_op);
> -                 if (gimple_nop_p (op_def_stmt)
> -                     || !flow_bb_inside_loop_p (loop, gimple_bb (op_def_stmt))
> -                     || !vinfo_for_stmt (op_def_stmt))
> -                    return false;
> -
> -                  if (STMT_VINFO_RELEVANT (vinfo_for_stmt (op_def_stmt))
> -                        != vect_used_in_outer
> -                      && STMT_VINFO_RELEVANT (vinfo_for_stmt (op_def_stmt))
> -                           != vect_used_in_outer_by_reduction)
> -                    return false;
> +                 if (STMT_VINFO_RELEVANT (op_def_info) != vect_used_in_outer
> +                     && (STMT_VINFO_RELEVANT (op_def_info)
> +                         != vect_used_in_outer_by_reduction))
> +                   return false;
>                  }
>
>                continue;
> @@ -2504,20 +2497,19 @@ report_vect_op (dump_flags_t msg_type, g
>    dump_gimple_stmt (msg_type, TDF_SLIM, stmt, 0);
>  }
>
> -/* DEF_STMT occurs in a loop that contains a potential reduction operation.
> -   Return true if the results of DEF_STMT are something that can be
> -   accumulated by such a reduction.  */
> +/* DEF_STMT_INFO occurs in a loop that contains a potential reduction
> +   operation.  Return true if the results of DEF_STMT_INFO are something
> +   that can be accumulated by such a reduction.  */
>
>  static bool
> -vect_valid_reduction_input_p (gimple *def_stmt)
> +vect_valid_reduction_input_p (stmt_vec_info def_stmt_info)
>  {
> -  stmt_vec_info def_stmt_info = vinfo_for_stmt (def_stmt);
> -  return (is_gimple_assign (def_stmt)
> -         || is_gimple_call (def_stmt)
> +  return (is_gimple_assign (def_stmt_info->stmt)
> +         || is_gimple_call (def_stmt_info->stmt)
>           || STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_induction_def
> -         || (gimple_code (def_stmt) == GIMPLE_PHI
> +         || (gimple_code (def_stmt_info->stmt) == GIMPLE_PHI
>               && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def
> -             && !is_loop_header_bb_p (gimple_bb (def_stmt))));
> +             && !is_loop_header_bb_p (gimple_bb (def_stmt_info->stmt))));
>  }
>
>  /* Detect SLP reduction of the form:
> @@ -2633,18 +2625,14 @@ vect_is_slp_reduction (loop_vec_info loo
>        if (gimple_assign_rhs2 (next_stmt) == lhs)
>         {
>           tree op = gimple_assign_rhs1 (next_stmt);
> -         gimple *def_stmt = NULL;
> -
> -          if (TREE_CODE (op) == SSA_NAME)
> -            def_stmt = SSA_NAME_DEF_STMT (op);
> +         stmt_vec_info def_stmt_info = loop_info->lookup_def (op);
>
>           /* Check that the other def is either defined in the loop
>              ("vect_internal_def"), or it's an induction (defined by a
>              loop-header phi-node).  */
> -          if (def_stmt
> -             && gimple_bb (def_stmt)
> -             && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
> -             && vect_valid_reduction_input_p (def_stmt))
> +         if (def_stmt_info
> +             && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt_info->stmt))
> +             && vect_valid_reduction_input_p (def_stmt_info))
>             {
>               lhs = gimple_assign_lhs (next_stmt);
>               next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> @@ -2656,18 +2644,14 @@ vect_is_slp_reduction (loop_vec_info loo
>        else
>         {
>            tree op = gimple_assign_rhs2 (next_stmt);
> -         gimple *def_stmt = NULL;
> -
> -          if (TREE_CODE (op) == SSA_NAME)
> -            def_stmt = SSA_NAME_DEF_STMT (op);
> +         stmt_vec_info def_stmt_info = loop_info->lookup_def (op);
>
>            /* Check that the other def is either defined in the loop
>              ("vect_internal_def"), or it's an induction (defined by a
>              loop-header phi-node).  */
> -          if (def_stmt
> -             && gimple_bb (def_stmt)
> -             && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
> -             && vect_valid_reduction_input_p (def_stmt))
> +         if (def_stmt_info
> +             && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt_info->stmt))
> +             && vect_valid_reduction_input_p (def_stmt_info))
>             {
>               if (dump_enabled_p ())
>                 {
> @@ -2896,7 +2880,7 @@ vect_is_simple_reduction (loop_vec_info
>  {
>    struct loop *loop = (gimple_bb (phi))->loop_father;
>    struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
> -  gimple *def_stmt, *def1 = NULL, *def2 = NULL, *phi_use_stmt = NULL;
> +  gimple *def_stmt, *phi_use_stmt = NULL;
>    enum tree_code orig_code, code;
>    tree op1, op2, op3 = NULL_TREE, op4 = NULL_TREE;
>    tree type;
> @@ -3020,7 +3004,7 @@ vect_is_simple_reduction (loop_vec_info
>            return NULL;
>          }
>
> -      def1 = SSA_NAME_DEF_STMT (op1);
> +      gimple *def1 = SSA_NAME_DEF_STMT (op1);
>        if (gimple_bb (def1)
>           && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt))
>            && loop->inner
> @@ -3178,14 +3162,9 @@ vect_is_simple_reduction (loop_vec_info
>       1) integer arithmetic and no trapv
>       2) floating point arithmetic, and special flags permit this optimization
>       3) nested cycle (i.e., outer loop vectorization).  */
> -  if (TREE_CODE (op1) == SSA_NAME)
> -    def1 = SSA_NAME_DEF_STMT (op1);
> -
> -  if (TREE_CODE (op2) == SSA_NAME)
> -    def2 = SSA_NAME_DEF_STMT (op2);
> -
> -  if (code != COND_EXPR
> -      && ((!def1 || gimple_nop_p (def1)) && (!def2 || gimple_nop_p (def2))))
> +  stmt_vec_info def1_info = loop_info->lookup_def (op1);
> +  stmt_vec_info def2_info = loop_info->lookup_def (op2);
> +  if (code != COND_EXPR && !def1_info && !def2_info)
>      {
>        if (dump_enabled_p ())
>         report_vect_op (MSG_NOTE, def_stmt, "reduction: no defs for operands: ");
> @@ -3196,22 +3175,22 @@ vect_is_simple_reduction (loop_vec_info
>       the other def is either defined in the loop ("vect_internal_def"),
>       or it's an induction (defined by a loop-header phi-node).  */
>
> -  if (def2 && def2 == phi
> +  if (def2_info
> +      && def2_info->stmt == phi
>        && (code == COND_EXPR
> -         || !def1 || gimple_nop_p (def1)
> -         || !flow_bb_inside_loop_p (loop, gimple_bb (def1))
> -         || vect_valid_reduction_input_p (def1)))
> +         || !def1_info
> +         || vect_valid_reduction_input_p (def1_info)))
>      {
>        if (dump_enabled_p ())
>         report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
>        return def_stmt;
>      }
>
> -  if (def1 && def1 == phi
> +  if (def1_info
> +      && def1_info->stmt == phi
>        && (code == COND_EXPR
> -         || !def2 || gimple_nop_p (def2)
> -         || !flow_bb_inside_loop_p (loop, gimple_bb (def2))
> -         || vect_valid_reduction_input_p (def2)))
> +         || !def2_info
> +         || vect_valid_reduction_input_p (def2_info)))
>      {
>        if (! nested_in_vect_loop && orig_code != MINUS_EXPR)
>         {
> @@ -6131,9 +6110,8 @@ vectorizable_reduction (gimple *stmt, gi
>    bool nested_cycle = false, found_nested_cycle_def = false;
>    bool double_reduc = false;
>    basic_block def_bb;
> -  struct loop * def_stmt_loop, *outer_loop = NULL;
> +  struct loop * def_stmt_loop;
>    tree def_arg;
> -  gimple *def_arg_stmt;
>    auto_vec<tree> vec_oprnds0;
>    auto_vec<tree> vec_oprnds1;
>    auto_vec<tree> vec_oprnds2;
> @@ -6151,7 +6129,6 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (nested_in_vect_loop_p (loop, stmt))
>      {
> -      outer_loop = loop;
>        loop = loop->inner;
>        nested_cycle = true;
>      }
> @@ -6731,13 +6708,10 @@ vectorizable_reduction (gimple *stmt, gi
>        def_stmt_loop = def_bb->loop_father;
>        def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_stmt,
>                                         loop_preheader_edge (def_stmt_loop));
> -      if (TREE_CODE (def_arg) == SSA_NAME
> -          && (def_arg_stmt = SSA_NAME_DEF_STMT (def_arg))
> -          && gimple_code (def_arg_stmt) == GIMPLE_PHI
> -          && flow_bb_inside_loop_p (outer_loop, gimple_bb (def_arg_stmt))
> -          && vinfo_for_stmt (def_arg_stmt)
> -          && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_arg_stmt))
> -              == vect_double_reduction_def)
> +      stmt_vec_info def_arg_stmt_info = loop_vinfo->lookup_def (def_arg);
> +      if (def_arg_stmt_info
> +         && (STMT_VINFO_DEF_TYPE (def_arg_stmt_info)
> +             == vect_double_reduction_def))
>          double_reduc = true;
>      }
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:23.793367723 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:27.281336751 +0100
> @@ -2389,11 +2389,8 @@ vect_detect_hybrid_slp_1 (tree *tp, int
>    if (wi->is_lhs)
>      return NULL_TREE;
>
> -  stmt_vec_info def_stmt_info;
> -  if (TREE_CODE (*tp) == SSA_NAME
> -      && !SSA_NAME_IS_DEFAULT_DEF (*tp)
> -      && (def_stmt_info = loop_vinfo->lookup_stmt (SSA_NAME_DEF_STMT (*tp)))
> -      && PURE_SLP_STMT (def_stmt_info))
> +  stmt_vec_info def_stmt_info = loop_vinfo->lookup_def (*tp);
> +  if (def_stmt_info && PURE_SLP_STMT (def_stmt_info))
>      {
>        if (dump_enabled_p ())
>         {

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [09/46] Add vec_info::lookup_single_use
  2018-07-24  9:56 ` [09/46] Add vec_info::lookup_single_use Richard Sandiford
@ 2018-07-25  9:13   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:13 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:56 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a helper function for seeing whether there is a single
> user of an SSA name, and whether that user has a stmt_vec_info.
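> 
> A before/after sketch (hand-written here, not an excerpt from the patch):
> 
>   /* Before: open-coded via single_imm_use.  */
>   use_operand_p dummy;
>   gimple *use_stmt;
>   stmt_vec_info use_stmt_info = NULL;
>   if (single_imm_use (lhs, &dummy, &use_stmt)
>       && (use_stmt_info = vinfo_for_stmt (use_stmt)))
>     ...;
> 
>   /* After: the lookup and the stmt_vec_info check are combined.  */
>   stmt_vec_info use_stmt_info = loop_vinfo->lookup_single_use (lhs);
>   if (use_stmt_info)
>     ...;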

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::lookup_single_use): Declare.
>         * tree-vectorizer.c (vec_info::lookup_single_use): New function.
>         * tree-vect-loop.c (vectorizable_reduction): Use it instead of
>         a single_imm_use-based sequence.
>         * tree-vect-stmts.c (supportable_widening_operation): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:27.285336715 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:30.401309046 +0100
> @@ -220,6 +220,7 @@ struct vec_info {
>    stmt_vec_info add_stmt (gimple *);
>    stmt_vec_info lookup_stmt (gimple *);
>    stmt_vec_info lookup_def (tree);
> +  stmt_vec_info lookup_single_use (tree);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:22:27.285336715 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:22:30.401309046 +0100
> @@ -548,6 +548,20 @@ vec_info::lookup_def (tree name)
>    return NULL;
>  }
>
> +/* See whether there is a single non-debug statement that uses LHS and
> +   whether that statement has an associated stmt_vec_info.  Return the
> +   stmt_vec_info if so, otherwise return null.  */
> +
> +stmt_vec_info
> +vec_info::lookup_single_use (tree lhs)
> +{
> +  use_operand_p dummy;
> +  gimple *use_stmt;
> +  if (single_imm_use (lhs, &dummy, &use_stmt))
> +    return lookup_stmt (use_stmt);
> +  return NULL;
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:27.277336786 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:30.401309046 +0100
> @@ -6138,6 +6138,7 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (gimple_code (stmt) == GIMPLE_PHI)
>      {
> +      tree phi_result = gimple_phi_result (stmt);
>        /* Analysis is fully done on the reduction stmt invocation.  */
>        if (! vec_stmt)
>         {
> @@ -6158,7 +6159,8 @@ vectorizable_reduction (gimple *stmt, gi
>        if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (reduc_stmt)))
>         reduc_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (reduc_stmt));
>
> -      if (STMT_VINFO_VEC_REDUCTION_TYPE (vinfo_for_stmt (reduc_stmt))
> +      stmt_vec_info reduc_stmt_info = vinfo_for_stmt (reduc_stmt);
> +      if (STMT_VINFO_VEC_REDUCTION_TYPE (reduc_stmt_info)
>           == EXTRACT_LAST_REDUCTION)
>         /* Leave the scalar phi in place.  */
>         return true;
> @@ -6185,15 +6187,12 @@ vectorizable_reduction (gimple *stmt, gi
>        else
>         ncopies = vect_get_num_copies (loop_vinfo, vectype_in);
>
> -      use_operand_p use_p;
> -      gimple *use_stmt;
> +      stmt_vec_info use_stmt_info;
>        if (ncopies > 1
> -         && (STMT_VINFO_RELEVANT (vinfo_for_stmt (reduc_stmt))
> -             <= vect_used_only_live)
> -         && single_imm_use (gimple_phi_result (stmt), &use_p, &use_stmt)
> -         && (use_stmt == reduc_stmt
> -             || (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use_stmt))
> -                 == reduc_stmt)))
> +         && STMT_VINFO_RELEVANT (reduc_stmt_info) <= vect_used_only_live
> +         && (use_stmt_info = loop_vinfo->lookup_single_use (phi_result))
> +         && (use_stmt_info == reduc_stmt_info
> +             || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt))
>         single_defuse_cycle = true;
>
>        /* Create the destination vector  */
> @@ -6955,13 +6954,13 @@ vectorizable_reduction (gimple *stmt, gi
>     This only works when we see both the reduction PHI and its only consumer
>     in vectorizable_reduction and there are no intermediate stmts
>     participating.  */
> -  use_operand_p use_p;
> -  gimple *use_stmt;
> +  stmt_vec_info use_stmt_info;
> +  tree reduc_phi_result = gimple_phi_result (reduc_def_stmt);
>    if (ncopies > 1
>        && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
> -      && single_imm_use (gimple_phi_result (reduc_def_stmt), &use_p, &use_stmt)
> -      && (use_stmt == stmt
> -         || STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use_stmt)) == stmt))
> +      && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
> +      && (use_stmt_info == stmt_info
> +         || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt))
>      {
>        single_defuse_cycle = true;
>        epilog_copies = 1;
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:27.281336751 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:30.401309046 +0100
> @@ -10310,14 +10310,11 @@ supportable_widening_operation (enum tre
>               same operation.  One such an example is s += a * b, where elements
>               in a and b cannot be reordered.  Here we check if the vector defined
>               by STMT is only directly used in the reduction statement.  */
> -          tree lhs = gimple_assign_lhs (stmt);
> -          use_operand_p dummy;
> -          gimple *use_stmt;
> -          stmt_vec_info use_stmt_info = NULL;
> -          if (single_imm_use (lhs, &dummy, &use_stmt)
> -              && (use_stmt_info = vinfo_for_stmt (use_stmt))
> -              && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def)
> -            return true;
> +         tree lhs = gimple_assign_lhs (stmt);
> +         stmt_vec_info use_stmt_info = loop_info->lookup_single_use (lhs);
> +         if (use_stmt_info
> +             && STMT_VINFO_DEF_TYPE (use_stmt_info) == vect_reduction_def)
> +           return true;
>          }
>        c1 = VEC_WIDEN_MULT_LO_EXPR;
>        c2 = VEC_WIDEN_MULT_HI_EXPR;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [10/46] Temporarily make stmt_vec_info a class
  2018-07-24  9:57 ` [10/46] Temporarily make stmt_vec_info a class Richard Sandiford
@ 2018-07-25  9:14   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:14 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:57 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch turns stmt_vec_info into an unspeakably bad wrapper class
> and adds an implicit conversion to the associated gimple stmt.
> Having this conversion makes the rest of the series easier to write,
> but since the class goes away again at the end of the series, I've
> not bothered adding any comments or tried to make it pretty.
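> 
> For the record, a hand-written fragment (not from the patch) showing
> what the conversions give us during the transition:
> 
>   stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
>   gimple *g = stmt_info;          /* operator gimple *, i.e. ->stmt.  */
>   _stmt_vec_info *p = stmt_info;  /* operator _stmt_vec_info *.  */
>   if (stmt_info)                  /* operator bool.  */
>     ...;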

So I guess I do not need to approve it ;)

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (stmt_vec_info): Temporarily change from
>         a typedef to a wrapper class.
>         (NULL_STMT_VEC_INFO): New macro.
>         (vec_info::stmt_infos): Change to vec<stmt_vec_info>.
>         (stmt_vec_info::operator*): New function.
>         (stmt_vec_info::operator gimple *): Likewise.
>         (set_vinfo_for_stmt): Use NULL_STMT_VEC_INFO.
>         (add_stmt_costs): Likewise.
>         * tree-vect-loop-manip.c (iv_phi_p): Likewise.
>         * tree-vect-loop.c (vect_compute_single_scalar_iteration_cost)
>         (vect_get_known_peeling_cost): Likewise.
>         (vect_estimate_min_profitable_iters): Likewise.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
>         * tree-vect-slp.c (vect_remove_slp_scalar_calls): Likewise.
>         * tree-vect-stmts.c (vect_build_gather_load_calls): Likewise.
>         (vectorizable_store, free_stmt_vec_infos): Likewise.
>         (new_stmt_vec_info): Cast the result of xcalloc to
>         _stmt_vec_info *.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:30.401309046 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:33.829278607 +0100
> @@ -21,12 +21,31 @@ Software Foundation; either version 3, o
>  #ifndef GCC_TREE_VECTORIZER_H
>  #define GCC_TREE_VECTORIZER_H
>
> +class stmt_vec_info {
> +public:
> +  stmt_vec_info () {}
> +  stmt_vec_info (struct _stmt_vec_info *ptr) : m_ptr (ptr) {}
> +  struct _stmt_vec_info *operator-> () const { return m_ptr; }
> +  struct _stmt_vec_info &operator* () const;
> +  operator struct _stmt_vec_info * () const { return m_ptr; }
> +  operator gimple * () const;
> +  operator void * () const { return m_ptr; }
> +  operator bool () const { return m_ptr; }
> +  bool operator == (const stmt_vec_info &x) { return x.m_ptr == m_ptr; }
> +  bool operator == (_stmt_vec_info *x) { return x == m_ptr; }
> +  bool operator != (const stmt_vec_info &x) { return x.m_ptr != m_ptr; }
> +  bool operator != (_stmt_vec_info *x) { return x != m_ptr; }
> +
> +private:
> +  struct _stmt_vec_info *m_ptr;
> +};
> +
> +#define NULL_STMT_VEC_INFO (stmt_vec_info (NULL))
> +
>  #include "tree-data-ref.h"
>  #include "tree-hash-traits.h"
>  #include "target.h"
>
> -typedef struct _stmt_vec_info *stmt_vec_info;
> -
>  /* Used for naming of new temporaries.  */
>  enum vect_var_kind {
>    vect_simple_var,
> @@ -229,7 +248,7 @@ struct vec_info {
>    vec_info_shared *shared;
>
>    /* The mapping of GIMPLE UID to stmt_vec_info.  */
> -  vec<struct _stmt_vec_info *> stmt_vec_infos;
> +  vec<stmt_vec_info> stmt_vec_infos;
>
>    /* All SLP instances.  */
>    auto_vec<slp_instance> slp_instances;
> @@ -1052,6 +1071,17 @@ #define VECT_SCALAR_BOOLEAN_TYPE_P(TYPE)
>         && TYPE_PRECISION (TYPE) == 1           \
>         && TYPE_UNSIGNED (TYPE)))
>
> +inline _stmt_vec_info &
> +stmt_vec_info::operator* () const
> +{
> +  return *m_ptr;
> +}
> +
> +inline stmt_vec_info::operator gimple * () const
> +{
> +  return m_ptr ? m_ptr->stmt : NULL;
> +}
> +
>  extern vec<stmt_vec_info> *stmt_vec_info_vec;
>
>  void set_stmt_vec_info_vec (vec<stmt_vec_info> *);
> @@ -1084,7 +1114,7 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
>      }
>    else
>      {
> -      gcc_checking_assert (info == NULL);
> +      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
>        (*stmt_vec_info_vec)[uid - 1] = info;
>      }
>  }
> @@ -1261,7 +1291,9 @@ add_stmt_costs (void *data, stmt_vector_
>    unsigned i;
>    FOR_EACH_VEC_ELT (*cost_vec, i, cost)
>      add_stmt_cost (data, cost->count, cost->kind,
> -                  cost->stmt ? vinfo_for_stmt (cost->stmt) : NULL,
> +                  (cost->stmt
> +                   ? vinfo_for_stmt (cost->stmt)
> +                   : NULL_STMT_VEC_INFO),
>                    cost->misalign, cost->where);
>  }
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-06-30 14:56:22.022893750 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:22:33.821278677 +0100
> @@ -1344,7 +1344,7 @@ iv_phi_p (gphi *phi)
>      return false;
>
>    stmt_vec_info stmt_info = vinfo_for_stmt (phi);
> -  gcc_assert (stmt_info != NULL);
> +  gcc_assert (stmt_info != NULL_STMT_VEC_INFO);
>    if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
>        || STMT_VINFO_DEF_TYPE (stmt_info) == vect_double_reduction_def)
>      return false;
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:30.401309046 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:33.821278677 +0100
> @@ -1139,7 +1139,7 @@ vect_compute_single_scalar_iteration_cos
>                     j, si)
>      {
>        struct _stmt_vec_info *stmt_info
> -       = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +       = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>        (void) add_stmt_cost (target_cost_data, si->count,
>                             si->kind, stmt_info, si->misalign,
>                             vect_body);
> @@ -3351,7 +3351,7 @@ vect_get_known_peeling_cost (loop_vec_in
>      FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
>         {
>           stmt_vec_info stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           retval += record_stmt_cost (prologue_cost_vec,
>                                       si->count * peel_iters_prologue,
>                                       si->kind, stmt_info, si->misalign,
> @@ -3361,7 +3361,7 @@ vect_get_known_peeling_cost (loop_vec_in
>      FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
>         {
>           stmt_vec_info stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           retval += record_stmt_cost (epilogue_cost_vec,
>                                       si->count * *peel_iters_epilogue,
>                                       si->kind, stmt_info, si->misalign,
> @@ -3504,7 +3504,7 @@ vect_estimate_min_profitable_iters (loop
>                             j, si)
>             {
>               struct _stmt_vec_info *stmt_info
> -               = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +               = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>               (void) add_stmt_cost (target_cost_data, si->count,
>                                     si->kind, stmt_info, si->misalign,
>                                     vect_epilogue);
> @@ -3541,7 +3541,7 @@ vect_estimate_min_profitable_iters (loop
>        FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo), j, si)
>         {
>           struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           (void) add_stmt_cost (target_cost_data,
>                                 si->count * peel_iters_prologue,
>                                 si->kind, stmt_info, si->misalign,
> @@ -3573,7 +3573,7 @@ vect_estimate_min_profitable_iters (loop
>        FOR_EACH_VEC_ELT (prologue_cost_vec, j, si)
>         {
>           struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
>                                 si->misalign, vect_prologue);
>         }
> @@ -3581,7 +3581,7 @@ vect_estimate_min_profitable_iters (loop
>        FOR_EACH_VEC_ELT (epilogue_cost_vec, j, si)
>         {
>           struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL;
> +           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
>                                 si->misalign, vect_epilogue);
>         }
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:27.281336751 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:33.825278642 +0100
> @@ -103,7 +103,7 @@ vect_init_pattern_stmt (gimple *pattern_
>  {
>    vec_info *vinfo = orig_stmt_info->vinfo;
>    stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
> -  if (pattern_stmt_info == NULL)
> +  if (pattern_stmt_info == NULL_STMT_VEC_INFO)
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:27.281336751 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:33.825278642 +0100
> @@ -4039,7 +4039,7 @@ vect_remove_slp_scalar_calls (slp_tree n
>        if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
>         continue;
>        stmt_info = vinfo_for_stmt (stmt);
> -      if (stmt_info == NULL
> +      if (stmt_info == NULL_STMT_VEC_INFO
>           || is_pattern_stmt_p (stmt_info)
>           || !PURE_SLP_STMT (stmt_info))
>         continue;
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:30.401309046 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:33.829278607 +0100
> @@ -2865,7 +2865,7 @@ vect_build_gather_load_calls (gimple *st
>           new_stmt = SSA_NAME_DEF_STMT (var);
>         }
>
> -      if (prev_stmt_info == NULL)
> +      if (prev_stmt_info == NULL_STMT_VEC_INFO)
>         STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
>        else
>         STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> @@ -6550,7 +6550,7 @@ vectorizable_store (gimple *stmt, gimple
>
>           vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
> -         if (prev_stmt_info == NULL)
> +         if (prev_stmt_info == NULL_STMT_VEC_INFO)
>             STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
>           else
>             STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> @@ -9805,7 +9805,7 @@ vect_remove_stores (gimple *first_stmt)
>  new_stmt_vec_info (gimple *stmt, vec_info *vinfo)
>  {
>    stmt_vec_info res;
> -  res = (stmt_vec_info) xcalloc (1, sizeof (struct _stmt_vec_info));
> +  res = (_stmt_vec_info *) xcalloc (1, sizeof (struct _stmt_vec_info));
>
>    STMT_VINFO_TYPE (res) = undef_vec_info_type;
>    STMT_VINFO_STMT (res) = stmt;
> @@ -9862,7 +9862,7 @@ free_stmt_vec_infos (vec<stmt_vec_info>
>    unsigned int i;
>    stmt_vec_info info;
>    FOR_EACH_VEC_ELT (*v, i, info)
> -    if (info != NULL)
> +    if (info != NULL_STMT_VEC_INFO)
>        free_stmt_vec_info (STMT_VINFO_STMT (info));
>    if (v == stmt_vec_info_vec)
>      stmt_vec_info_vec = NULL;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [11/46] Pass back a stmt_vec_info from vect_is_simple_use
  2018-07-24  9:57 ` [11/46] Pass back a stmt_vec_info from vect_is_simple_use Richard Sandiford
@ 2018-07-25  9:18   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:18 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:57 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes vect_is_simple_use pass back a stmt_vec_info to
> those callers that want it.  Most users only need the stmt_vec_info,
> but some need the gimple stmt too.
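> 
> As a usage sketch (hand-written here, not an excerpt from the patch),
> a caller that wants both pieces of information now does:
> 
>   enum vect_def_type dt;
>   stmt_vec_info def_stmt_info;
>   gimple *def_stmt;
>   if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt_info, &def_stmt))
>     return false;
>   /* DEF_STMT_INFO is null when the definition is outside the
>      vectorisable region (vect_external_def); DEF_STMT is still the
>      raw defining statement in that case.  */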

Hmm.  Unfortunately it's not redundant for vect_external_def ...

> It's probably high time we added a class to represent "simple operands"
> instead, but I have a separate series that tries to clean up how
> operands are handled (with a view to allowing mixed vector sizes).

Well, we need to do something similar to SLP and allow annotation on
SSA use edges, so operand info needs to be context-dependent.

One of my "plans" was to move everything over to the SLP datastructure
(imperfect as it is) to make that the "single" representation of stuff.
A very simple experiment allowing group sizes of one in SLP detection
worked reasonably well (and exposed all the cases we do not yet
handle in SLP ...).

OK.

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_is_simple_use): Add an optional
>         stmt_vec_info * parameter before the optional gimple **.
>         * tree-vect-stmts.c (vect_is_simple_use): Likewise.
>         (process_use, vect_get_vec_def_for_operand_1): Update callers.
>         (vect_get_vec_def_for_operand, vectorizable_shift): Likewise.
>         * tree-vect-loop.c (vectorizable_reduction): Likewise.
>         (vectorizable_live_operation): Likewise.
>         * tree-vect-patterns.c (type_conversion_p): Likewise.
>         (vect_look_through_possible_promotion): Likewise.
>         (vect_recog_rotate_pattern): Likewise.
>         * tree-vect-slp.c (vect_get_and_check_slp_defs): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:33.829278607 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:37.257248166 +0100
> @@ -1532,9 +1532,10 @@ extern tree get_mask_type_for_scalar_typ
>  extern tree get_same_sized_vectype (tree, tree);
>  extern bool vect_get_loop_mask_type (loop_vec_info);
>  extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
> -                               gimple ** = NULL);
> +                               stmt_vec_info * = NULL, gimple ** = NULL);
>  extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
> -                               tree *, gimple ** = NULL);
> +                               tree *, stmt_vec_info * = NULL,
> +                               gimple ** = NULL);
>  extern bool supportable_widening_operation (enum tree_code, gimple *, tree,
>                                             tree, enum tree_code *,
>                                             enum tree_code *, int *,
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:33.829278607 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:37.257248166 +0100
> @@ -459,11 +459,9 @@ process_use (gimple *stmt, tree use, loo
>              enum vect_relevant relevant, vec<gimple *> *worklist,
>              bool force)
>  {
> -  struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    stmt_vec_info dstmt_vinfo;
>    basic_block bb, def_bb;
> -  gimple *def_stmt;
>    enum vect_def_type dt;
>
>    /* case 1: we are only interested in uses that need to be vectorized.  Uses
> @@ -471,7 +469,7 @@ process_use (gimple *stmt, tree use, loo
>    if (!force && !exist_non_indexing_operands_for_use_p (use, stmt))
>       return true;
>
> -  if (!vect_is_simple_use (use, loop_vinfo, &dt, &def_stmt))
> +  if (!vect_is_simple_use (use, loop_vinfo, &dt, &dstmt_vinfo))
>      {
>        if (dump_enabled_p ())
>          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -479,27 +477,20 @@ process_use (gimple *stmt, tree use, loo
>        return false;
>      }
>
> -  if (!def_stmt || gimple_nop_p (def_stmt))
> +  if (!dstmt_vinfo)
>      return true;
>
> -  def_bb = gimple_bb (def_stmt);
> -  if (!flow_bb_inside_loop_p (loop, def_bb))
> -    {
> -      if (dump_enabled_p ())
> -       dump_printf_loc (MSG_NOTE, vect_location, "def_stmt is out of loop.\n");
> -      return true;
> -    }
> +  def_bb = gimple_bb (dstmt_vinfo->stmt);
>
> -  /* case 2: A reduction phi (STMT) defined by a reduction stmt (DEF_STMT).
> -     DEF_STMT must have already been processed, because this should be the
> +  /* case 2: A reduction phi (STMT) defined by a reduction stmt (DSTMT_VINFO).
> +     DSTMT_VINFO must have already been processed, because this should be the
>       only way that STMT, which is a reduction-phi, was put in the worklist,
> -     as there should be no other uses for DEF_STMT in the loop.  So we just
> +     as there should be no other uses for DSTMT_VINFO in the loop.  So we just
>       check that everything is as expected, and we are done.  */
> -  dstmt_vinfo = vinfo_for_stmt (def_stmt);
>    bb = gimple_bb (stmt);
>    if (gimple_code (stmt) == GIMPLE_PHI
>        && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
> -      && gimple_code (def_stmt) != GIMPLE_PHI
> +      && gimple_code (dstmt_vinfo->stmt) != GIMPLE_PHI
>        && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def
>        && bb->loop_father == def_bb->loop_father)
>      {
> @@ -514,7 +505,7 @@ process_use (gimple *stmt, tree use, loo
>
>    /* case 3a: outer-loop stmt defining an inner-loop stmt:
>         outer-loop-header-bb:
> -               d = def_stmt
> +               d = dstmt_vinfo
>         inner-loop:
>                 stmt # use (d)
>         outer-loop-tail-bb:
> @@ -554,7 +545,7 @@ process_use (gimple *stmt, tree use, loo
>         outer-loop-header-bb:
>                 ...
>         inner-loop:
> -               d = def_stmt
> +               d = dstmt_vinfo
>         outer-loop-tail-bb (or outer-loop-exit-bb in double reduction):
>                 stmt # use (d)          */
>    else if (flow_loop_nested_p (bb->loop_father, def_bb->loop_father))
> @@ -601,7 +592,7 @@ process_use (gimple *stmt, tree use, loo
>      }
>
>
> -  vect_mark_relevant (worklist, def_stmt, relevant, false);
> +  vect_mark_relevant (worklist, dstmt_vinfo, relevant, false);
>    return true;
>  }
>
> @@ -1563,7 +1554,9 @@ vect_get_vec_def_for_operand (tree op, g
>        dump_printf (MSG_NOTE, "\n");
>      }
>
> -  is_simple_use = vect_is_simple_use (op, loop_vinfo, &dt, &def_stmt);
> +  stmt_vec_info def_stmt_info;
> +  is_simple_use = vect_is_simple_use (op, loop_vinfo, &dt,
> +                                     &def_stmt_info, &def_stmt);
>    gcc_assert (is_simple_use);
>    if (def_stmt && dump_enabled_p ())
>      {
> @@ -1588,7 +1581,7 @@ vect_get_vec_def_for_operand (tree op, g
>        return vect_init_vector (stmt, op, vector_type, NULL);
>      }
>    else
> -    return vect_get_vec_def_for_operand_1 (def_stmt, dt);
> +    return vect_get_vec_def_for_operand_1 (def_stmt_info, dt);
>  }
>
>
> @@ -5479,7 +5472,9 @@ vectorizable_shift (gimple *stmt, gimple
>      return false;
>
>    op1 = gimple_assign_rhs2 (stmt);
> -  if (!vect_is_simple_use (op1, vinfo, &dt[1], &op1_vectype))
> +  stmt_vec_info op1_def_stmt_info;
> +  if (!vect_is_simple_use (op1, vinfo, &dt[1], &op1_vectype,
> +                          &op1_def_stmt_info))
>      {
>        if (dump_enabled_p ())
>          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -5524,12 +5519,8 @@ vectorizable_shift (gimple *stmt, gimple
>        /* If the shift amount is computed by a pattern stmt we cannot
>           use the scalar amount directly thus give up and use a vector
>          shift.  */
> -      if (dt[1] == vect_internal_def)
> -       {
> -         gimple *def = SSA_NAME_DEF_STMT (op1);
> -         if (is_pattern_stmt_p (vinfo_for_stmt (def)))
> -           scalar_shift_arg = false;
> -       }
> +      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
> +       scalar_shift_arg = false;
>      }
>    else
>      {
> @@ -10051,7 +10042,10 @@ get_same_sized_vectype (tree scalar_type
>     VINFO - the vect info of the loop or basic block that is being vectorized.
>     OPERAND - operand in the loop or bb.
>     Output:
> -   DEF_STMT_OUT (optional) - the defining stmt in case OPERAND is an SSA_NAME.
> +   DEF_STMT_INFO_OUT (optional) - information about the defining stmt in
> +     case OPERAND is an SSA_NAME that is defined in the vectorizable region
> +   DEF_STMT_OUT (optional) - the defining stmt in case OPERAND is an SSA_NAME;
> +     the definition could be anywhere in the function
>     DT - the type of definition
>
>     Returns whether a stmt with OPERAND can be vectorized.
> @@ -10064,8 +10058,10 @@ get_same_sized_vectype (tree scalar_type
>
>  bool
>  vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
> -                   gimple **def_stmt_out)
> +                   stmt_vec_info *def_stmt_info_out, gimple **def_stmt_out)
>  {
> +  if (def_stmt_info_out)
> +    *def_stmt_info_out = NULL;
>    if (def_stmt_out)
>      *def_stmt_out = NULL;
>    *dt = vect_unknown_def_type;
> @@ -10113,6 +10109,8 @@ vect_is_simple_use (tree operand, vec_in
>               *dt = vect_unknown_def_type;
>               break;
>             }
> +         if (def_stmt_info_out)
> +           *def_stmt_info_out = stmt_vinfo;
>         }
>        if (def_stmt_out)
>         *def_stmt_out = def_stmt;
> @@ -10175,14 +10173,18 @@ vect_is_simple_use (tree operand, vec_in
>
>  bool
>  vect_is_simple_use (tree operand, vec_info *vinfo, enum vect_def_type *dt,
> -                   tree *vectype, gimple **def_stmt_out)
> +                   tree *vectype, stmt_vec_info *def_stmt_info_out,
> +                   gimple **def_stmt_out)
>  {
> +  stmt_vec_info def_stmt_info;
>    gimple *def_stmt;
> -  if (!vect_is_simple_use (operand, vinfo, dt, &def_stmt))
> +  if (!vect_is_simple_use (operand, vinfo, dt, &def_stmt_info, &def_stmt))
>      return false;
>
>    if (def_stmt_out)
>      *def_stmt_out = def_stmt;
> +  if (def_stmt_info_out)
> +    *def_stmt_info_out = def_stmt_info;
>
>    /* Now get a vector type if the def is internal, otherwise supply
>       NULL_TREE and leave it up to the caller to figure out a proper
> @@ -10193,8 +10195,7 @@ vect_is_simple_use (tree operand, vec_in
>        || *dt == vect_double_reduction_def
>        || *dt == vect_nested_cycle)
>      {
> -      stmt_vec_info stmt_info = vinfo_for_stmt (def_stmt);
> -      *vectype = STMT_VINFO_VECTYPE (stmt_info);
> +      *vectype = STMT_VINFO_VECTYPE (def_stmt_info);
>        gcc_assert (*vectype != NULL_TREE);
>        if (dump_enabled_p ())
>         {
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:33.821278677 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:37.253248202 +0100
> @@ -6090,7 +6090,6 @@ vectorizable_reduction (gimple *stmt, gi
>    int op_type;
>    optab optab;
>    tree new_temp = NULL_TREE;
> -  gimple *def_stmt;
>    enum vect_def_type dt, cond_reduc_dt = vect_unknown_def_type;
>    gimple *cond_reduc_def_stmt = NULL;
>    enum tree_code cond_reduc_op_code = ERROR_MARK;
> @@ -6324,13 +6323,14 @@ vectorizable_reduction (gimple *stmt, gi
>        if (i == 0 && code == COND_EXPR)
>          continue;
>
> -      is_simple_use = vect_is_simple_use (ops[i], loop_vinfo,
> -                                         &dts[i], &tem, &def_stmt);
> +      stmt_vec_info def_stmt_info;
> +      is_simple_use = vect_is_simple_use (ops[i], loop_vinfo, &dts[i], &tem,
> +                                         &def_stmt_info);
>        dt = dts[i];
>        gcc_assert (is_simple_use);
>        if (dt == vect_reduction_def)
>         {
> -          reduc_def_stmt = def_stmt;
> +         reduc_def_stmt = def_stmt_info;
>           reduc_index = i;
>           continue;
>         }
> @@ -6352,11 +6352,11 @@ vectorizable_reduction (gimple *stmt, gi
>         return false;
>
>        if (dt == vect_nested_cycle)
> -        {
> -          found_nested_cycle_def = true;
> -          reduc_def_stmt = def_stmt;
> -          reduc_index = i;
> -        }
> +       {
> +         found_nested_cycle_def = true;
> +         reduc_def_stmt = def_stmt_info;
> +         reduc_index = i;
> +       }
>
>        if (i == 1 && code == COND_EXPR)
>         {
> @@ -6367,11 +6367,11 @@ vectorizable_reduction (gimple *stmt, gi
>               cond_reduc_val = ops[i];
>             }
>           if (dt == vect_induction_def
> -             && def_stmt != NULL
> -             && is_nonwrapping_integer_induction (def_stmt, loop))
> +             && def_stmt_info
> +             && is_nonwrapping_integer_induction (def_stmt_info, loop))
>             {
>               cond_reduc_dt = dt;
> -             cond_reduc_def_stmt = def_stmt;
> +             cond_reduc_def_stmt = def_stmt_info;
>             }
>         }
>      }
> @@ -7958,7 +7958,7 @@ vectorizable_live_operation (gimple *stm
>    else
>      {
>        enum vect_def_type dt = STMT_VINFO_DEF_TYPE (stmt_info);
> -      vec_lhs = vect_get_vec_def_for_operand_1 (stmt, dt);
> +      vec_lhs = vect_get_vec_def_for_operand_1 (stmt_info, dt);
>        gcc_checking_assert (ncopies == 1
>                            || !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:33.825278642 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:37.253248202 +0100
> @@ -250,7 +250,9 @@ type_conversion_p (tree name, gimple *us
>    enum vect_def_type dt;
>
>    stmt_vinfo = vinfo_for_stmt (use_stmt);
> -  if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, def_stmt))
> +  stmt_vec_info def_stmt_info;
> +  if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, &def_stmt_info,
> +                          def_stmt))
>      return false;
>
>    if (dt != vect_internal_def
> @@ -371,9 +373,10 @@ vect_look_through_possible_promotion (ve
>    while (TREE_CODE (op) == SSA_NAME && INTEGRAL_TYPE_P (op_type))
>      {
>        /* See whether OP is simple enough to vectorize.  */
> +      stmt_vec_info def_stmt_info;
>        gimple *def_stmt;
>        vect_def_type dt;
> -      if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt))
> +      if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt_info, &def_stmt))
>         break;
>
>        /* If OP is the input of a demotion, skip over it to see whether
> @@ -407,17 +410,15 @@ vect_look_through_possible_promotion (ve
>          the cast is potentially vectorizable.  */
>        if (!def_stmt)
>         break;
> -      if (dt == vect_internal_def)
> -       {
> -         caster = vinfo_for_stmt (def_stmt);
> -         /* Ignore pattern statements, since we don't link uses for them.  */
> -         if (single_use_p
> -             && !STMT_VINFO_RELATED_STMT (caster)
> -             && !has_single_use (res))
> -           *single_use_p = false;
> -       }
> -      else
> -       caster = NULL;
> +      caster = def_stmt_info;
> +
> +      /* Ignore pattern statements, since we don't link uses for them.  */
> +      if (caster
> +         && single_use_p
> +         && !STMT_VINFO_RELATED_STMT (caster)
> +         && !has_single_use (res))
> +       *single_use_p = false;
> +
>        gassign *assign = dyn_cast <gassign *> (def_stmt);
>        if (!assign || !CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def_stmt)))
>         break;
> @@ -1988,7 +1989,8 @@ vect_recog_rotate_pattern (stmt_vec_info
>        || !TYPE_UNSIGNED (type))
>      return NULL;
>
> -  if (!vect_is_simple_use (oprnd1, vinfo, &dt, &def_stmt))
> +  stmt_vec_info def_stmt_info;
> +  if (!vect_is_simple_use (oprnd1, vinfo, &dt, &def_stmt_info, &def_stmt))
>      return NULL;
>
>    if (dt != vect_internal_def
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:33.825278642 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:37.253248202 +0100
> @@ -303,7 +303,6 @@ vect_get_and_check_slp_defs (vec_info *v
>    gimple *stmt = stmts[stmt_num];
>    tree oprnd;
>    unsigned int i, number_of_oprnds;
> -  gimple *def_stmt;
>    enum vect_def_type dt = vect_uninitialized_def;
>    bool pattern = false;
>    slp_oprnd_info oprnd_info;
> @@ -357,7 +356,8 @@ vect_get_and_check_slp_defs (vec_info *v
>
>        oprnd_info = (*oprnds_info)[i];
>
> -      if (!vect_is_simple_use (oprnd, vinfo, &dt, &def_stmt))
> +      stmt_vec_info def_stmt_info;
> +      if (!vect_is_simple_use (oprnd, vinfo, &dt, &def_stmt_info))
>         {
>           if (dump_enabled_p ())
>             {
> @@ -370,13 +370,10 @@ vect_get_and_check_slp_defs (vec_info *v
>           return -1;
>         }
>
> -      /* Check if DEF_STMT is a part of a pattern in LOOP and get the def stmt
> -         from the pattern.  Check that all the stmts of the node are in the
> -         pattern.  */
> -      if (def_stmt && gimple_bb (def_stmt)
> -         && vect_stmt_in_region_p (vinfo, def_stmt)
> -         && vinfo_for_stmt (def_stmt)
> -         && is_pattern_stmt_p (vinfo_for_stmt (def_stmt)))
> +      /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
> +        the def stmt from the pattern.  Check that all the stmts of the
> +        node are in the pattern.  */
> +      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
>          {
>            pattern = true;
>            if (!first && !oprnd_info->first_pattern
> @@ -405,7 +402,7 @@ vect_get_and_check_slp_defs (vec_info *v
>               return 1;
>              }
>
> -          dt = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (def_stmt));
> +         dt = STMT_VINFO_DEF_TYPE (def_stmt_info);
>
>            if (dt == vect_unknown_def_type)
>              {
> @@ -415,7 +412,7 @@ vect_get_and_check_slp_defs (vec_info *v
>                return -1;
>              }
>
> -          switch (gimple_code (def_stmt))
> +         switch (gimple_code (def_stmt_info->stmt))
>              {
>              case GIMPLE_PHI:
>              case GIMPLE_ASSIGN:
> @@ -499,7 +496,7 @@ vect_get_and_check_slp_defs (vec_info *v
>         case vect_reduction_def:
>         case vect_induction_def:
>         case vect_internal_def:
> -         oprnd_info->def_stmts.quick_push (def_stmt);
> +         oprnd_info->def_stmts.quick_push (def_stmt_info);
>           break;
>
>         default:

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [12/46] Make vect_finish_stmt_generation return a stmt_vec_info
  2018-07-24  9:58 ` [12/46] Make vect_finish_stmt_generation return " Richard Sandiford
@ 2018-07-25  9:19   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:19 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:58 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes vect_finish_replace_stmt and vect_finish_stmt_generation
> return the stmt_vec_info for the vectorised statement, so that the caller
> doesn't need a separate vinfo_for_stmt call to get at it.
>
> This involved changing the structure of the statement-generating loops
> so that they use narrow scopes for the vectorised gimple statements
> and use the existing (wider) scopes for the associated stmt_vec_infos.
> This helps with gimple stmt->stmt_vec_info changes further down the line.
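>
> Roughly, the loops end up in this shape (a sketch only; the exact
> statement built varies from function to function):
>
>   stmt_vec_info new_stmt_info = NULL;              /* wider scope */
>   for (unsigned j = 0; j < ncopies; j++)
>     {
>       /* Narrow scope for the gimple statement itself.  */
>       gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0);
>       new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>     }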

OK.

> The way we do this generation is another area ripe for clean-up,
> but that's too much of a rabbit-hole for this series.

Indeed ...

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_finish_replace_stmt): Return a stmt_vec_info
>         (vect_finish_stmt_generation): Likewise.
>         * tree-vect-stmts.c (vect_finish_stmt_generation_1): Likewise.
>         (vect_finish_replace_stmt, vect_finish_stmt_generation): Likewise.
>         (vect_build_gather_load_calls): Use the return value of the above
>         functions instead of a separate call to vinfo_for_stmt.  Use narrow
>         scopes for the input gimple stmt and wider scopes for the associated
>         stmt_vec_info.  Use vec_info::lookup_def when setting these
>         stmt_vec_infos from an SSA_NAME definition.
>         (vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
>         (vect_create_vectorized_demotion_stmts, vectorizable_conversion)
>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>         (vectorizable_comparison): Likewise.
>         * tree-vect-loop.c (vectorize_fold_left_reduction): Likewise.
>         (vectorizable_reduction): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:37.257248166 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:40.725217371 +0100
> @@ -1548,9 +1548,9 @@ extern void free_stmt_vec_info (gimple *
>  extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
>                                   enum vect_cost_for_stmt, stmt_vec_info,
>                                   int, enum vect_cost_model_location);
> -extern void vect_finish_replace_stmt (gimple *, gimple *);
> -extern void vect_finish_stmt_generation (gimple *, gimple *,
> -                                         gimple_stmt_iterator *);
> +extern stmt_vec_info vect_finish_replace_stmt (gimple *, gimple *);
> +extern stmt_vec_info vect_finish_stmt_generation (gimple *, gimple *,
> +                                                 gimple_stmt_iterator *);
>  extern bool vect_mark_stmts_to_be_vectorized (loop_vec_info);
>  extern tree vect_get_store_rhs (gimple *);
>  extern tree vect_get_vec_def_for_operand_1 (gimple *, enum vect_def_type);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:37.257248166 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:40.725217371 +0100
> @@ -1729,15 +1729,15 @@ vect_get_vec_defs (tree op0, tree op1, g
>
>  /* Helper function called by vect_finish_replace_stmt and
>     vect_finish_stmt_generation.  Set the location of the new
> -   statement and create a stmt_vec_info for it.  */
> +   statement and create and return a stmt_vec_info for it.  */
>
> -static void
> +static stmt_vec_info
>  vect_finish_stmt_generation_1 (gimple *stmt, gimple *vec_stmt)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>
> -  vinfo->add_stmt (vec_stmt);
> +  stmt_vec_info vec_stmt_info = vinfo->add_stmt (vec_stmt);
>
>    if (dump_enabled_p ())
>      {
> @@ -1753,12 +1753,15 @@ vect_finish_stmt_generation_1 (gimple *s
>    int lp_nr = lookup_stmt_eh_lp (stmt);
>    if (lp_nr != 0 && stmt_could_throw_p (vec_stmt))
>      add_stmt_to_eh_lp (vec_stmt, lp_nr);
> +
> +  return vec_stmt_info;
>  }
>
>  /* Replace the scalar statement STMT with a new vector statement VEC_STMT,
> -   which sets the same scalar result as STMT did.  */
> +   which sets the same scalar result as STMT did.  Create and return a
> +   stmt_vec_info for VEC_STMT.  */
>
> -void
> +stmt_vec_info
>  vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
>  {
>    gcc_assert (gimple_get_lhs (stmt) == gimple_get_lhs (vec_stmt));
> @@ -1766,14 +1769,13 @@ vect_finish_replace_stmt (gimple *stmt,
>    gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
>    gsi_replace (&gsi, vec_stmt, false);
>
> -  vect_finish_stmt_generation_1 (stmt, vec_stmt);
> +  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
>  }
>
> -/* Function vect_finish_stmt_generation.
> -
> -   Insert a new stmt.  */
> +/* Add VEC_STMT to the vectorized implementation of STMT and insert it
> +   before *GSI.  Create and return a stmt_vec_info for VEC_STMT.  */
>
> -void
> +stmt_vec_info
>  vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
>                              gimple_stmt_iterator *gsi)
>  {
> @@ -1806,7 +1808,7 @@ vect_finish_stmt_generation (gimple *stm
>         }
>      }
>    gsi_insert_before (gsi, vec_stmt, GSI_SAME_STMT);
> -  vect_finish_stmt_generation_1 (stmt, vec_stmt);
> +  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
>  }
>
>  /* We want to vectorize a call to combined function CFN with function
> @@ -2774,7 +2776,6 @@ vect_build_gather_load_calls (gimple *st
>    for (int j = 0; j < ncopies; ++j)
>      {
>        tree op, var;
> -      gimple *new_stmt;
>        if (modifier == WIDEN && (j & 1))
>         op = permute_vec_elements (vec_oprnd0, vec_oprnd0,
>                                    perm_mask, stmt, gsi);
> @@ -2791,7 +2792,7 @@ vect_build_gather_load_calls (gimple *st
>                                 TYPE_VECTOR_SUBPARTS (idxtype)));
>           var = vect_get_new_ssa_name (idxtype, vect_simple_var);
>           op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
> -         new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> +         gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
>           vect_finish_stmt_generation (stmt, new_stmt, gsi);
>           op = var;
>         }
> @@ -2816,8 +2817,8 @@ vect_build_gather_load_calls (gimple *st
>                                TYPE_VECTOR_SUBPARTS (masktype)));
>                   var = vect_get_new_ssa_name (masktype, vect_simple_var);
>                   mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);
> -                 new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR,
> -                                                 mask_op);
> +                 gassign *new_stmt
> +                   = gimple_build_assign (var, VIEW_CONVERT_EXPR, mask_op);
>                   vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                   mask_op = var;
>                 }
> @@ -2825,28 +2826,29 @@ vect_build_gather_load_calls (gimple *st
>           src_op = mask_op;
>         }
>
> -      new_stmt = gimple_build_call (gs_info->decl, 5, src_op, ptr, op,
> -                                   mask_op, scale);
> +      gcall *new_call = gimple_build_call (gs_info->decl, 5, src_op, ptr, op,
> +                                          mask_op, scale);
>
> +      stmt_vec_info new_stmt_info;
>        if (!useless_type_conversion_p (vectype, rettype))
>         {
>           gcc_assert (known_eq (TYPE_VECTOR_SUBPARTS (vectype),
>                                 TYPE_VECTOR_SUBPARTS (rettype)));
>           op = vect_get_new_ssa_name (rettype, vect_simple_var);
> -         gimple_call_set_lhs (new_stmt, op);
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         gimple_call_set_lhs (new_call, op);
> +         vect_finish_stmt_generation (stmt, new_call, gsi);
>           var = make_ssa_name (vec_dest);
>           op = build1 (VIEW_CONVERT_EXPR, vectype, op);
> -         new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> +         gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>         }
>        else
>         {
> -         var = make_ssa_name (vec_dest, new_stmt);
> -         gimple_call_set_lhs (new_stmt, var);
> +         var = make_ssa_name (vec_dest, new_call);
> +         gimple_call_set_lhs (new_call, var);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, new_call, gsi);
>         }
>
> -      vect_finish_stmt_generation (stmt, new_stmt, gsi);
> -
>        if (modifier == NARROW)
>         {
>           if ((j & 1) == 0)
> @@ -2855,14 +2857,14 @@ vect_build_gather_load_calls (gimple *st
>               continue;
>             }
>           var = permute_vec_elements (prev_res, var, perm_mask, stmt, gsi);
> -         new_stmt = SSA_NAME_DEF_STMT (var);
> +         new_stmt_info = loop_vinfo->lookup_def (var);
>         }
>
>        if (prev_stmt_info == NULL_STMT_VEC_INFO)
> -       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +      prev_stmt_info = new_stmt_info;
>      }
>  }
>
> @@ -3023,7 +3025,7 @@ vectorizable_bswap (gimple *stmt, gimple
>
>    /* Transform.  */
>    vec<tree> vec_oprnds = vNULL;
> -  gimple *new_stmt = NULL;
> +  stmt_vec_info new_stmt_info = NULL;
>    stmt_vec_info prev_stmt_info = NULL;
>    for (unsigned j = 0; j < ncopies; j++)
>      {
> @@ -3038,6 +3040,7 @@ vectorizable_bswap (gimple *stmt, gimple
>        tree vop;
>        FOR_EACH_VEC_ELT (vec_oprnds, i, vop)
>         {
> +        gimple *new_stmt;
>          tree tem = make_ssa_name (char_vectype);
>          new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
>                                                       char_vectype, vop));
> @@ -3049,20 +3052,20 @@ vectorizable_bswap (gimple *stmt, gimple
>          tem = make_ssa_name (vectype);
>          new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
>                                                       vectype, tem2));
> -        vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +        new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>           if (slp_node)
> -           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +          SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>         }
>
>        if (slp_node)
>          continue;
>
>        if (j == 0)
> -        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +      prev_stmt_info = new_stmt_info;
>      }
>
>    vec_oprnds.release ();
> @@ -3123,7 +3126,6 @@ vectorizable_call (gimple *gs, gimple_st
>      = { vect_unknown_def_type, vect_unknown_def_type, vect_unknown_def_type,
>         vect_unknown_def_type };
>    int ndts = ARRAY_SIZE (dt);
> -  gimple *new_stmt = NULL;
>    int ncopies, j;
>    auto_vec<tree, 8> vargs;
>    auto_vec<tree, 8> orig_vargs;
> @@ -3361,6 +3363,7 @@ vectorizable_call (gimple *gs, gimple_st
>
>    bool masked_loop_p = loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo);
>
> +  stmt_vec_info new_stmt_info = NULL;
>    prev_stmt_info = NULL;
>    if (modifier == NONE || ifn != IFN_LAST)
>      {
> @@ -3399,16 +3402,19 @@ vectorizable_call (gimple *gs, gimple_st
>                         = gimple_build_call_internal_vec (ifn, vargs);
>                       gimple_call_set_lhs (call, half_res);
>                       gimple_call_set_nothrow (call, true);
> -                     new_stmt = call;
> -                     vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, call, gsi);
>                       if ((i & 1) == 0)
>                         {
>                           prev_res = half_res;
>                           continue;
>                         }
>                       new_temp = make_ssa_name (vec_dest);
> -                     new_stmt = gimple_build_assign (new_temp, convert_code,
> -                                                     prev_res, half_res);
> +                     gimple *new_stmt
> +                       = gimple_build_assign (new_temp, convert_code,
> +                                              prev_res, half_res);
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                     }
>                   else
>                     {
> @@ -3431,10 +3437,10 @@ vectorizable_call (gimple *gs, gimple_st
>                       new_temp = make_ssa_name (vec_dest, call);
>                       gimple_call_set_lhs (call, new_temp);
>                       gimple_call_set_nothrow (call, true);
> -                     new_stmt = call;
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, call, gsi);
>                     }
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> -                 SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +                 SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>                 }
>
>               for (i = 0; i < nargs; i++)
> @@ -3475,7 +3481,9 @@ vectorizable_call (gimple *gs, gimple_st
>               gimple *init_stmt = gimple_build_assign (new_var, cst);
>               vect_init_vector_1 (stmt, init_stmt, NULL);
>               new_temp = make_ssa_name (vec_dest);
> -             new_stmt = gimple_build_assign (new_temp, new_var);
> +             gimple *new_stmt = gimple_build_assign (new_temp, new_var);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>             }
>           else if (modifier == NARROW)
>             {
> @@ -3486,16 +3494,17 @@ vectorizable_call (gimple *gs, gimple_st
>               gcall *call = gimple_build_call_internal_vec (ifn, vargs);
>               gimple_call_set_lhs (call, half_res);
>               gimple_call_set_nothrow (call, true);
> -             new_stmt = call;
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
>               if ((j & 1) == 0)
>                 {
>                   prev_res = half_res;
>                   continue;
>                 }
>               new_temp = make_ssa_name (vec_dest);
> -             new_stmt = gimple_build_assign (new_temp, convert_code,
> -                                             prev_res, half_res);
> +             gassign *new_stmt = gimple_build_assign (new_temp, convert_code,
> +                                                      prev_res, half_res);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>             }
>           else
>             {
> @@ -3504,19 +3513,18 @@ vectorizable_call (gimple *gs, gimple_st
>                 call = gimple_build_call_internal_vec (ifn, vargs);
>               else
>                 call = gimple_build_call_vec (fndecl, vargs);
> -             new_temp = make_ssa_name (vec_dest, new_stmt);
> +             new_temp = make_ssa_name (vec_dest, call);
>               gimple_call_set_lhs (call, new_temp);
>               gimple_call_set_nothrow (call, true);
> -             new_stmt = call;
> +             new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
>             }
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>           if (j == (modifier == NARROW ? 1 : 0))
> -           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>           else
> -           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -         prev_stmt_info = vinfo_for_stmt (new_stmt);
> +         prev_stmt_info = new_stmt_info;
>         }
>      }
>    else if (modifier == NARROW)
> @@ -3560,9 +3568,9 @@ vectorizable_call (gimple *gs, gimple_st
>                   new_temp = make_ssa_name (vec_dest, call);
>                   gimple_call_set_lhs (call, new_temp);
>                   gimple_call_set_nothrow (call, true);
> -                 new_stmt = call;
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> -                 SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, call, gsi);
> +                 SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>                 }
>
>               for (i = 0; i < nargs; i++)
> @@ -3585,7 +3593,8 @@ vectorizable_call (gimple *gs, gimple_st
>                 }
>               else
>                 {
> -                 vec_oprnd1 = gimple_call_arg (new_stmt, 2*i + 1);
> +                 vec_oprnd1 = gimple_call_arg (new_stmt_info->stmt,
> +                                               2 * i + 1);
>                   vec_oprnd0
>                     = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd1);
>                   vec_oprnd1
> @@ -3596,17 +3605,17 @@ vectorizable_call (gimple *gs, gimple_st
>               vargs.quick_push (vec_oprnd1);
>             }
>
> -         new_stmt = gimple_build_call_vec (fndecl, vargs);
> +         gcall *new_stmt = gimple_build_call_vec (fndecl, vargs);
>           new_temp = make_ssa_name (vec_dest, new_stmt);
>           gimple_call_set_lhs (new_stmt, new_temp);
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>           if (j == 0)
> -           STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
> +           STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
>           else
> -           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -         prev_stmt_info = vinfo_for_stmt (new_stmt);
> +         prev_stmt_info = new_stmt_info;
>         }
>
>        *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
> @@ -3629,7 +3638,8 @@ vectorizable_call (gimple *gs, gimple_st
>      stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
>    lhs = gimple_get_lhs (stmt_info->stmt);
>
> -  new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> +  gassign *new_stmt
> +    = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
>    set_vinfo_for_stmt (new_stmt, stmt_info);
>    set_vinfo_for_stmt (stmt_info->stmt, NULL);
>    STMT_VINFO_STMT (stmt_info) = new_stmt;
> @@ -3752,7 +3762,6 @@ vectorizable_simd_clone_call (gimple *st
>    vec_info *vinfo = stmt_info->vinfo;
>    struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
>    tree fndecl, new_temp;
> -  gimple *new_stmt = NULL;
>    int ncopies, j;
>    auto_vec<simd_call_arg_info> arginfo;
>    vec<tree> vargs = vNULL;
> @@ -4106,7 +4115,7 @@ vectorizable_simd_clone_call (gimple *st
>                         = build3 (BIT_FIELD_REF, atype, vec_oprnd0,
>                                   bitsize_int (prec),
>                                   bitsize_int ((m & (k - 1)) * prec));
> -                     new_stmt
> +                     gassign *new_stmt
>                         = gimple_build_assign (make_ssa_name (atype),
>                                                vec_oprnd0);
>                       vect_finish_stmt_generation (stmt, new_stmt, gsi);
> @@ -4142,7 +4151,7 @@ vectorizable_simd_clone_call (gimple *st
>                       else
>                         {
>                           vec_oprnd0 = build_constructor (atype, ctor_elts);
> -                         new_stmt
> +                         gassign *new_stmt
>                             = gimple_build_assign (make_ssa_name (atype),
>                                                    vec_oprnd0);
>                           vect_finish_stmt_generation (stmt, new_stmt, gsi);
> @@ -4189,7 +4198,7 @@ vectorizable_simd_clone_call (gimple *st
>                                ncopies * nunits);
>                   tree tcst = wide_int_to_tree (type, cst);
>                   tree phi_arg = copy_ssa_name (op);
> -                 new_stmt
> +                 gassign *new_stmt
>                     = gimple_build_assign (phi_arg, code, phi_res, tcst);
>                   gimple_stmt_iterator si = gsi_after_labels (loop->header);
>                   gsi_insert_after (&si, new_stmt, GSI_NEW_STMT);
> @@ -4211,8 +4220,9 @@ vectorizable_simd_clone_call (gimple *st
>                                j * nunits);
>                   tree tcst = wide_int_to_tree (type, cst);
>                   new_temp = make_ssa_name (TREE_TYPE (op));
> -                 new_stmt = gimple_build_assign (new_temp, code,
> -                                                 arginfo[i].op, tcst);
> +                 gassign *new_stmt
> +                   = gimple_build_assign (new_temp, code,
> +                                          arginfo[i].op, tcst);
>                   vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                   vargs.safe_push (new_temp);
>                 }
> @@ -4228,7 +4238,7 @@ vectorizable_simd_clone_call (gimple *st
>             }
>         }
>
> -      new_stmt = gimple_build_call_vec (fndecl, vargs);
> +      gcall *new_call = gimple_build_call_vec (fndecl, vargs);
>        if (vec_dest)
>         {
>           gcc_assert (ratype || simd_clone_subparts (rtype) == nunits);
> @@ -4236,12 +4246,13 @@ vectorizable_simd_clone_call (gimple *st
>             new_temp = create_tmp_var (ratype);
>           else if (simd_clone_subparts (vectype)
>                    == simd_clone_subparts (rtype))
> -           new_temp = make_ssa_name (vec_dest, new_stmt);
> +           new_temp = make_ssa_name (vec_dest, new_call);
>           else
> -           new_temp = make_ssa_name (rtype, new_stmt);
> -         gimple_call_set_lhs (new_stmt, new_temp);
> +           new_temp = make_ssa_name (rtype, new_call);
> +         gimple_call_set_lhs (new_call, new_temp);
>         }
> -      vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +      stmt_vec_info new_stmt_info
> +       = vect_finish_stmt_generation (stmt, new_call, gsi);
>
>        if (vec_dest)
>         {
> @@ -4264,15 +4275,18 @@ vectorizable_simd_clone_call (gimple *st
>                   else
>                     t = build3 (BIT_FIELD_REF, vectype, new_temp,
>                                 bitsize_int (prec), bitsize_int (l * prec));
> -                 new_stmt
> +                 gimple *new_stmt
>                     = gimple_build_assign (make_ssa_name (vectype), t);
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +
>                   if (j == 0 && l == 0)
> -                   STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +                   STMT_VINFO_VEC_STMT (stmt_info)
> +                     = *vec_stmt = new_stmt_info;
>                   else
> -                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -                 prev_stmt_info = vinfo_for_stmt (new_stmt);
> +                 prev_stmt_info = new_stmt_info;
>                 }
>
>               if (ratype)
> @@ -4293,9 +4307,10 @@ vectorizable_simd_clone_call (gimple *st
>                     {
>                       tree tem = build4 (ARRAY_REF, rtype, new_temp,
>                                          size_int (m), NULL_TREE, NULL_TREE);
> -                     new_stmt
> +                     gimple *new_stmt
>                         = gimple_build_assign (make_ssa_name (rtype), tem);
> -                     vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                       CONSTRUCTOR_APPEND_ELT (ret_ctor_elts, NULL_TREE,
>                                               gimple_assign_lhs (new_stmt));
>                     }
> @@ -4306,16 +4321,17 @@ vectorizable_simd_clone_call (gimple *st
>               if ((j & (k - 1)) != k - 1)
>                 continue;
>               vec_oprnd0 = build_constructor (vectype, ret_ctor_elts);
> -             new_stmt
> +             gimple *new_stmt
>                 = gimple_build_assign (make_ssa_name (vec_dest), vec_oprnd0);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>               if ((unsigned) j == k - 1)
> -               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>               else
> -               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -             prev_stmt_info = vinfo_for_stmt (new_stmt);
> +             prev_stmt_info = new_stmt_info;
>               continue;
>             }
>           else if (ratype)
> @@ -4323,19 +4339,20 @@ vectorizable_simd_clone_call (gimple *st
>               tree t = build_fold_addr_expr (new_temp);
>               t = build2 (MEM_REF, vectype, t,
>                           build_int_cst (TREE_TYPE (t), 0));
> -             new_stmt
> +             gimple *new_stmt
>                 = gimple_build_assign (make_ssa_name (vec_dest), t);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>               vect_clobber_variable (stmt, gsi, new_temp);
>             }
>         }
>
>        if (j == 0)
> -       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +      prev_stmt_info = new_stmt_info;
>      }
>
>    vargs.release ();
> @@ -4348,6 +4365,7 @@ vectorizable_simd_clone_call (gimple *st
>    if (slp_node)
>      return true;
>
> +  gimple *new_stmt;
>    if (scalar_dest)
>      {
>        type = TREE_TYPE (scalar_dest);
> @@ -4465,7 +4483,6 @@ vect_create_vectorized_demotion_stmts (v
>  {
>    unsigned int i;
>    tree vop0, vop1, new_tmp, vec_dest;
> -  gimple *new_stmt;
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>
>    vec_dest = vec_dsts.pop ();
> @@ -4475,10 +4492,11 @@ vect_create_vectorized_demotion_stmts (v
>        /* Create demotion operation.  */
>        vop0 = (*vec_oprnds)[i];
>        vop1 = (*vec_oprnds)[i + 1];
> -      new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
> +      gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
>        new_tmp = make_ssa_name (vec_dest, new_stmt);
>        gimple_assign_set_lhs (new_stmt, new_tmp);
> -      vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +      stmt_vec_info new_stmt_info
> +       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>        if (multi_step_cvt)
>         /* Store the resulting vector for next recursive call.  */
> @@ -4489,15 +4507,15 @@ vect_create_vectorized_demotion_stmts (v
>              vectors in SLP_NODE or in vector info of the scalar statement
>              (or in STMT_VINFO_RELATED_STMT chain).  */
>           if (slp_node)
> -           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>           else
>             {
>               if (!*prev_stmt_info)
> -               STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
> +               STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
>               else
> -               STMT_VINFO_RELATED_STMT (*prev_stmt_info) = new_stmt;
> +               STMT_VINFO_RELATED_STMT (*prev_stmt_info) = new_stmt_info;
>
> -             *prev_stmt_info = vinfo_for_stmt (new_stmt);
> +             *prev_stmt_info = new_stmt_info;
>             }
>         }
>      }
> @@ -4595,7 +4613,6 @@ vectorizable_conversion (gimple *stmt, g
>    tree new_temp;
>    enum vect_def_type dt[2] = {vect_unknown_def_type, vect_unknown_def_type};
>    int ndts = 2;
> -  gimple *new_stmt = NULL;
>    stmt_vec_info prev_stmt_info;
>    poly_uint64 nunits_in;
>    poly_uint64 nunits_out;
> @@ -4965,31 +4982,37 @@ vectorizable_conversion (gimple *stmt, g
>
>           FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
>             {
> +             stmt_vec_info new_stmt_info;
>               /* Arguments are ready, create the new vector stmt.  */
>               if (code1 == CALL_EXPR)
>                 {
> -                 new_stmt = gimple_build_call (decl1, 1, vop0);
> +                 gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
>                   new_temp = make_ssa_name (vec_dest, new_stmt);
>                   gimple_call_set_lhs (new_stmt, new_temp);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
>               else
>                 {
>                   gcc_assert (TREE_CODE_LENGTH (code1) == unary_op);
> -                 new_stmt = gimple_build_assign (vec_dest, code1, vop0);
> +                 gassign *new_stmt
> +                   = gimple_build_assign (vec_dest, code1, vop0);
>                   new_temp = make_ssa_name (vec_dest, new_stmt);
>                   gimple_assign_set_lhs (new_stmt, new_temp);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
>
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
>               if (slp_node)
> -               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>               else
>                 {
>                   if (!prev_stmt_info)
> -                   STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +                   STMT_VINFO_VEC_STMT (stmt_info)
> +                     = *vec_stmt = new_stmt_info;
>                   else
> -                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -                 prev_stmt_info = vinfo_for_stmt (new_stmt);
> +                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +                 prev_stmt_info = new_stmt_info;
>                 }
>             }
>         }
> @@ -5075,36 +5098,39 @@ vectorizable_conversion (gimple *stmt, g
>
>           FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
>             {
> +             stmt_vec_info new_stmt_info;
>               if (cvt_type)
>                 {
>                   if (codecvt1 == CALL_EXPR)
>                     {
> -                     new_stmt = gimple_build_call (decl1, 1, vop0);
> +                     gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
>                       new_temp = make_ssa_name (vec_dest, new_stmt);
>                       gimple_call_set_lhs (new_stmt, new_temp);
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                     }
>                   else
>                     {
>                       gcc_assert (TREE_CODE_LENGTH (codecvt1) == unary_op);
>                       new_temp = make_ssa_name (vec_dest);
> -                     new_stmt = gimple_build_assign (new_temp, codecvt1,
> -                                                     vop0);
> +                     gassign *new_stmt
> +                       = gimple_build_assign (new_temp, codecvt1, vop0);
> +                     new_stmt_info
> +                       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                     }
> -
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
>               else
> -               new_stmt = SSA_NAME_DEF_STMT (vop0);
> +               new_stmt_info = vinfo->lookup_def (vop0);
>
>               if (slp_node)
> -               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>               else
>                 {
>                   if (!prev_stmt_info)
> -                   STMT_VINFO_VEC_STMT (stmt_info) = new_stmt;
> +                   STMT_VINFO_VEC_STMT (stmt_info) = new_stmt_info;
>                   else
> -                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -                 prev_stmt_info = vinfo_for_stmt (new_stmt);
> +                   STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +                 prev_stmt_info = new_stmt_info;
>                 }
>             }
>         }
> @@ -5136,19 +5162,20 @@ vectorizable_conversion (gimple *stmt, g
>               {
>                 if (codecvt1 == CALL_EXPR)
>                   {
> -                   new_stmt = gimple_build_call (decl1, 1, vop0);
> +                   gcall *new_stmt = gimple_build_call (decl1, 1, vop0);
>                     new_temp = make_ssa_name (vec_dest, new_stmt);
>                     gimple_call_set_lhs (new_stmt, new_temp);
> +                   vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                   }
>                 else
>                   {
>                     gcc_assert (TREE_CODE_LENGTH (codecvt1) == unary_op);
>                     new_temp = make_ssa_name (vec_dest);
> -                   new_stmt = gimple_build_assign (new_temp, codecvt1,
> -                                                   vop0);
> +                   gassign *new_stmt
> +                     = gimple_build_assign (new_temp, codecvt1, vop0);
> +                   vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                   }
>
> -               vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 vec_oprnds0[i] = new_temp;
>               }
>
> @@ -5196,7 +5223,6 @@ vectorizable_assignment (gimple *stmt, g
>    tree vop;
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    vec_info *vinfo = stmt_info->vinfo;
> -  gimple *new_stmt = NULL;
>    stmt_vec_info prev_stmt_info = NULL;
>    enum tree_code code;
>    tree vectype_in;
> @@ -5306,28 +5332,29 @@ vectorizable_assignment (gimple *stmt, g
>          vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
>
>        /* Arguments are ready. create the new vector stmt.  */
> +      stmt_vec_info new_stmt_info = NULL;
>        FOR_EACH_VEC_ELT (vec_oprnds, i, vop)
>         {
>          if (CONVERT_EXPR_CODE_P (code)
>              || code == VIEW_CONVERT_EXPR)
>            vop = build1 (VIEW_CONVERT_EXPR, vectype, vop);
> -         new_stmt = gimple_build_assign (vec_dest, vop);
> +        gassign *new_stmt = gimple_build_assign (vec_dest, vop);
>           new_temp = make_ssa_name (vec_dest, new_stmt);
>           gimple_assign_set_lhs (new_stmt, new_temp);
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +        new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>           if (slp_node)
> -           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +          SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>         }
>
>        if (slp_node)
>          continue;
>
>        if (j == 0)
> -        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +      prev_stmt_info = new_stmt_info;
>      }
>
>    vec_oprnds.release ();
> @@ -5398,7 +5425,6 @@ vectorizable_shift (gimple *stmt, gimple
>    machine_mode optab_op2_mode;
>    enum vect_def_type dt[2] = {vect_unknown_def_type, vect_unknown_def_type};
>    int ndts = 2;
> -  gimple *new_stmt = NULL;
>    stmt_vec_info prev_stmt_info;
>    poly_uint64 nunits_in;
>    poly_uint64 nunits_out;
> @@ -5706,25 +5732,26 @@ vectorizable_shift (gimple *stmt, gimple
>          vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
>
>        /* Arguments are ready.  Create the new vector stmt.  */
> +      stmt_vec_info new_stmt_info = NULL;
>        FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
>          {
>            vop1 = vec_oprnds1[i];
> -         new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
> +         gassign *new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1);
>            new_temp = make_ssa_name (vec_dest, new_stmt);
>            gimple_assign_set_lhs (new_stmt, new_temp);
> -          vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>            if (slp_node)
> -            SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>          }
>
>        if (slp_node)
>          continue;
>
>        if (j == 0)
> -        STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -        STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +      prev_stmt_info = new_stmt_info;
>      }
>
>    vec_oprnds0.release ();
> @@ -5762,7 +5789,6 @@ vectorizable_operation (gimple *stmt, gi
>    enum vect_def_type dt[3]
>      = {vect_unknown_def_type, vect_unknown_def_type, vect_unknown_def_type};
>    int ndts = 3;
> -  gimple *new_stmt = NULL;
>    stmt_vec_info prev_stmt_info;
>    poly_uint64 nunits_in;
>    poly_uint64 nunits_out;
> @@ -6090,37 +6116,41 @@ vectorizable_operation (gimple *stmt, gi
>         }
>
>        /* Arguments are ready.  Create the new vector stmt.  */
> +      stmt_vec_info new_stmt_info = NULL;
>        FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
>          {
>           vop1 = ((op_type == binary_op || op_type == ternary_op)
>                   ? vec_oprnds1[i] : NULL_TREE);
>           vop2 = ((op_type == ternary_op)
>                   ? vec_oprnds2[i] : NULL_TREE);
> -         new_stmt = gimple_build_assign (vec_dest, code, vop0, vop1, vop2);
> +         gassign *new_stmt = gimple_build_assign (vec_dest, code,
> +                                                  vop0, vop1, vop2);
>           new_temp = make_ssa_name (vec_dest, new_stmt);
>           gimple_assign_set_lhs (new_stmt, new_temp);
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>           if (vec_cvt_dest)
>             {
>               new_temp = build1 (VIEW_CONVERT_EXPR, vectype_out, new_temp);
> -             new_stmt = gimple_build_assign (vec_cvt_dest, VIEW_CONVERT_EXPR,
> -                                             new_temp);
> +             gassign *new_stmt
> +               = gimple_build_assign (vec_cvt_dest, VIEW_CONVERT_EXPR,
> +                                      new_temp);
>               new_temp = make_ssa_name (vec_cvt_dest, new_stmt);
>               gimple_assign_set_lhs (new_stmt, new_temp);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>             }
>            if (slp_node)
> -           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +           SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>          }
>
>        if (slp_node)
>          continue;
>
>        if (j == 0)
> -       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
> -       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -      prev_stmt_info = vinfo_for_stmt (new_stmt);
> +       STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +      prev_stmt_info = new_stmt_info;
>      }
>
>    vec_oprnds0.release ();
> @@ -6230,7 +6260,6 @@ vectorizable_store (gimple *stmt, gimple
>    vec_info *vinfo = stmt_info->vinfo;
>    tree aggr_type;
>    gather_scatter_info gs_info;
> -  gimple *new_stmt;
>    poly_uint64 vf;
>    vec_load_store_type vls_type;
>    tree ref_type;
> @@ -6520,7 +6549,8 @@ vectorizable_store (gimple *stmt, gimple
>                                     TYPE_VECTOR_SUBPARTS (srctype)));
>               var = vect_get_new_ssa_name (srctype, vect_simple_var);
>               src = build1 (VIEW_CONVERT_EXPR, srctype, src);
> -             new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
> +             gassign *new_stmt
> +               = gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
>               vect_finish_stmt_generation (stmt, new_stmt, gsi);
>               src = var;
>             }
> @@ -6531,21 +6561,22 @@ vectorizable_store (gimple *stmt, gimple
>                                     TYPE_VECTOR_SUBPARTS (idxtype)));
>               var = vect_get_new_ssa_name (idxtype, vect_simple_var);
>               op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
> -             new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> +             gassign *new_stmt
> +               = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
>               vect_finish_stmt_generation (stmt, new_stmt, gsi);
>               op = var;
>             }
>
> -         new_stmt
> +         gcall *new_stmt
>             = gimple_build_call (gs_info.decl, 5, ptr, mask, op, src, scale);
> -
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         stmt_vec_info new_stmt_info
> +           = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>           if (prev_stmt_info == NULL_STMT_VEC_INFO)
> -           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>           else
> -           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -         prev_stmt_info = vinfo_for_stmt (new_stmt);
> +           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +         prev_stmt_info = new_stmt_info;
>         }
>        return true;
>      }
> @@ -6806,7 +6837,8 @@ vectorizable_store (gimple *stmt, gimple
>
>                   /* And store it to *running_off.  */
>                   assign = gimple_build_assign (newref, elem);
> -                 vect_finish_stmt_generation (stmt, assign, gsi);
> +                 stmt_vec_info assign_info
> +                   = vect_finish_stmt_generation (stmt, assign, gsi);
>
>                   group_el += lnel;
>                   if (! slp
> @@ -6825,10 +6857,10 @@ vectorizable_store (gimple *stmt, gimple
>                     {
>                       if (j == 0 && i == 0)
>                         STMT_VINFO_VEC_STMT (stmt_info)
> -                           = *vec_stmt = assign;
> +                           = *vec_stmt = assign_info;
>                       else
> -                       STMT_VINFO_RELATED_STMT (prev_stmt_info) = assign;
> -                     prev_stmt_info = vinfo_for_stmt (assign);
> +                       STMT_VINFO_RELATED_STMT (prev_stmt_info) = assign_info;
> +                     prev_stmt_info = assign_info;
>                     }
>                 }
>             }
> @@ -6931,7 +6963,7 @@ vectorizable_store (gimple *stmt, gimple
>    tree vec_mask = NULL_TREE;
>    for (j = 0; j < ncopies; j++)
>      {
> -
> +      stmt_vec_info new_stmt_info;
>        if (j == 0)
>         {
>            if (slp)
> @@ -7081,15 +7113,14 @@ vectorizable_store (gimple *stmt, gimple
>               gimple_call_set_lhs (call, data_ref);
>             }
>           gimple_call_set_nothrow (call, true);
> -         new_stmt = call;
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
>
>           /* Record that VEC_ARRAY is now dead.  */
>           vect_clobber_variable (stmt, gsi, vec_array);
>         }
>        else
>         {
> -         new_stmt = NULL;
> +         new_stmt_info = NULL;
>           if (grouped_store)
>             {
>               if (j == 0)
> @@ -7126,8 +7157,8 @@ vectorizable_store (gimple *stmt, gimple
>                       (IFN_SCATTER_STORE, 4, dataref_ptr, vec_offset,
>                        scale, vec_oprnd);
>                   gimple_call_set_nothrow (call, true);
> -                 new_stmt = call;
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, call, gsi);
>                   break;
>                 }
>
> @@ -7186,7 +7217,8 @@ vectorizable_store (gimple *stmt, gimple
>                                                   dataref_ptr, ptr,
>                                                   final_mask, vec_oprnd);
>                   gimple_call_set_nothrow (call, true);
> -                 new_stmt = call;
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, call, gsi);
>                 }
>               else
>                 {
> @@ -7206,9 +7238,11 @@ vectorizable_store (gimple *stmt, gimple
>                       = build_aligned_type (TREE_TYPE (data_ref),
>                                             TYPE_ALIGN (elem_type));
>                   vect_copy_ref_info (data_ref, DR_REF (first_dr));
> -                 new_stmt = gimple_build_assign (data_ref, vec_oprnd);
> +                 gassign *new_stmt
> +                   = gimple_build_assign (data_ref, vec_oprnd);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>               if (slp)
>                 continue;
> @@ -7221,10 +7255,10 @@ vectorizable_store (gimple *stmt, gimple
>        if (!slp)
>         {
>           if (j == 0)
> -           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +           STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>           else
> -           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -         prev_stmt_info = vinfo_for_stmt (new_stmt);
> +           STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +         prev_stmt_info = new_stmt_info;
>         }
>      }
>
> @@ -7370,7 +7404,6 @@ vectorizable_load (gimple *stmt, gimple_
>    tree elem_type;
>    tree new_temp;
>    machine_mode mode;
> -  gimple *new_stmt = NULL;
>    tree dummy;
>    enum dr_alignment_support alignment_support_scheme;
>    tree dataref_ptr = NULL_TREE;
> @@ -7812,14 +7845,17 @@ vectorizable_load (gimple *stmt, gimple_
>         {
>           if (nloads > 1)
>             vec_alloc (v, nloads);
> +         stmt_vec_info new_stmt_info = NULL;
>           for (i = 0; i < nloads; i++)
>             {
>               tree this_off = build_int_cst (TREE_TYPE (alias_off),
>                                              group_el * elsz + cst_offset);
>               tree data_ref = build2 (MEM_REF, ltype, running_off, this_off);
>               vect_copy_ref_info (data_ref, DR_REF (first_dr));
> -             new_stmt = gimple_build_assign (make_ssa_name (ltype), data_ref);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             gassign *new_stmt
> +               = gimple_build_assign (make_ssa_name (ltype), data_ref);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>               if (nloads > 1)
>                 CONSTRUCTOR_APPEND_ELT (v, NULL_TREE,
>                                         gimple_assign_lhs (new_stmt));
> @@ -7841,31 +7877,33 @@ vectorizable_load (gimple *stmt, gimple_
>             {
>               tree vec_inv = build_constructor (lvectype, v);
>               new_temp = vect_init_vector (stmt, vec_inv, lvectype, gsi);
> -             new_stmt = SSA_NAME_DEF_STMT (new_temp);
> +             new_stmt_info = vinfo->lookup_def (new_temp);
>               if (lvectype != vectype)
>                 {
> -                 new_stmt = gimple_build_assign (make_ssa_name (vectype),
> -                                                 VIEW_CONVERT_EXPR,
> -                                                 build1 (VIEW_CONVERT_EXPR,
> -                                                         vectype, new_temp));
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 gassign *new_stmt
> +                   = gimple_build_assign (make_ssa_name (vectype),
> +                                          VIEW_CONVERT_EXPR,
> +                                          build1 (VIEW_CONVERT_EXPR,
> +                                                  vectype, new_temp));
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
>             }
>
>           if (slp)
>             {
>               if (slp_perm)
> -               dr_chain.quick_push (gimple_assign_lhs (new_stmt));
> +               dr_chain.quick_push (gimple_assign_lhs (new_stmt_info->stmt));
>               else
> -               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>             }
>           else
>             {
>               if (j == 0)
> -               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>               else
> -               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -             prev_stmt_info = vinfo_for_stmt (new_stmt);
> +               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +             prev_stmt_info = new_stmt_info;
>             }
>         }
>        if (slp_perm)
> @@ -8122,6 +8160,7 @@ vectorizable_load (gimple *stmt, gimple_
>    poly_uint64 group_elt = 0;
>    for (j = 0; j < ncopies; j++)
>      {
> +      stmt_vec_info new_stmt_info = NULL;
>        /* 1. Create the vector or array pointer update chain.  */
>        if (j == 0)
>         {
> @@ -8228,8 +8267,7 @@ vectorizable_load (gimple *stmt, gimple_
>             }
>           gimple_call_set_lhs (call, vec_array);
>           gimple_call_set_nothrow (call, true);
> -         new_stmt = call;
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
>
>           /* Extract each vector into an SSA_NAME.  */
>           for (i = 0; i < vec_num; i++)
> @@ -8264,6 +8302,7 @@ vectorizable_load (gimple *stmt, gimple_
>                                                stmt, bump);
>
>               /* 2. Create the vector-load in the loop.  */
> +             gimple *new_stmt = NULL;
>               switch (alignment_support_scheme)
>                 {
>                 case dr_aligned:
> @@ -8421,7 +8460,8 @@ vectorizable_load (gimple *stmt, gimple_
>                 }
>               new_temp = make_ssa_name (vec_dest, new_stmt);
>               gimple_set_lhs (new_stmt, new_temp);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>               /* 3. Handle explicit realignment if necessary/supported.
>                  Create in loop:
> @@ -8437,7 +8477,8 @@ vectorizable_load (gimple *stmt, gimple_
>                                                   msq, lsq, realignment_token);
>                   new_temp = make_ssa_name (vec_dest, new_stmt);
>                   gimple_assign_set_lhs (new_stmt, new_temp);
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>
>                   if (alignment_support_scheme == dr_explicit_realign_optimized)
>                     {
> @@ -8477,7 +8518,7 @@ vectorizable_load (gimple *stmt, gimple_
>                                                 (gimple_assign_rhs1 (stmt))));
>                       new_temp = vect_init_vector (stmt, tem, vectype, NULL);
>                       new_stmt = SSA_NAME_DEF_STMT (new_temp);
> -                     vinfo->add_stmt (new_stmt);
> +                     new_stmt_info = vinfo->add_stmt (new_stmt);
>                     }
>                   else
>                     {
> @@ -8485,7 +8526,7 @@ vectorizable_load (gimple *stmt, gimple_
>                       gsi_next (&gsi2);
>                       new_temp = vect_init_vector (stmt, scalar_dest,
>                                                    vectype, &gsi2);
> -                     new_stmt = SSA_NAME_DEF_STMT (new_temp);
> +                     new_stmt_info = vinfo->lookup_def (new_temp);
>                     }
>                 }
>
> @@ -8494,7 +8535,7 @@ vectorizable_load (gimple *stmt, gimple_
>                   tree perm_mask = perm_mask_for_reverse (vectype);
>                   new_temp = permute_vec_elements (new_temp, new_temp,
>                                                    perm_mask, stmt, gsi);
> -                 new_stmt = SSA_NAME_DEF_STMT (new_temp);
> +                 new_stmt_info = vinfo->lookup_def (new_temp);
>                 }
>
>               /* Collect vector loads and later create their permutation in
> @@ -8504,7 +8545,7 @@ vectorizable_load (gimple *stmt, gimple_
>
>               /* Store vector loads in the corresponding SLP_NODE.  */
>               if (slp && !slp_perm)
> -               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +               SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>
>               /* With SLP permutation we load the gaps as well, without
>                  we need to skip the gaps after we manage to fully load
> @@ -8561,10 +8602,10 @@ vectorizable_load (gimple *stmt, gimple_
>            else
>             {
>               if (j == 0)
> -               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt;
> +               STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>               else
> -               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt;
> -             prev_stmt_info = vinfo_for_stmt (new_stmt);
> +               STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> +             prev_stmt_info = new_stmt_info;
>             }
>          }
>        dr_chain.release ();
> @@ -8869,7 +8910,7 @@ vectorizable_condition (gimple *stmt, gi
>    /* Handle cond expr.  */
>    for (j = 0; j < ncopies; j++)
>      {
> -      gimple *new_stmt = NULL;
> +      stmt_vec_info new_stmt_info = NULL;
>        if (j == 0)
>         {
>            if (slp_node)
> @@ -8974,6 +9015,7 @@ vectorizable_condition (gimple *stmt, gi
>               else
>                 {
>                   new_temp = make_ssa_name (vec_cmp_type);
> +                 gassign *new_stmt;
>                   if (bitop1 == BIT_NOT_EXPR)
>                     new_stmt = gimple_build_assign (new_temp, bitop1,
>                                                     vec_cond_rhs);
> @@ -9005,19 +9047,19 @@ vectorizable_condition (gimple *stmt, gi
>               if (!is_gimple_val (vec_compare))
>                 {
>                   tree vec_compare_name = make_ssa_name (vec_cmp_type);
> -                 new_stmt = gimple_build_assign (vec_compare_name,
> -                                                 vec_compare);
> +                 gassign *new_stmt = gimple_build_assign (vec_compare_name,
> +                                                          vec_compare);
>                   vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                   vec_compare = vec_compare_name;
>                 }
>               gcc_assert (reduc_index == 2);
> -             new_stmt = gimple_build_call_internal
> +             gcall *new_stmt = gimple_build_call_internal
>                 (IFN_FOLD_EXTRACT_LAST, 3, else_clause, vec_compare,
>                  vec_then_clause);
>               gimple_call_set_lhs (new_stmt, scalar_dest);
>               SSA_NAME_DEF_STMT (scalar_dest) = new_stmt;
>               if (stmt == gsi_stmt (*gsi))
> -               vect_finish_replace_stmt (stmt, new_stmt);
> +               new_stmt_info = vect_finish_replace_stmt (stmt, new_stmt);
>               else
>                 {
>                   /* In this case we're moving the definition to later in the
> @@ -9025,30 +9067,32 @@ vectorizable_condition (gimple *stmt, gi
>                      lhs are in phi statements.  */
>                   gimple_stmt_iterator old_gsi = gsi_for_stmt (stmt);
>                   gsi_remove (&old_gsi, true);
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 new_stmt_info
> +                   = vect_finish_stmt_generation (stmt, new_stmt, gsi);
>                 }
>             }
>           else
>             {
>               new_temp = make_ssa_name (vec_dest);
> -             new_stmt = gimple_build_assign (new_temp, VEC_COND_EXPR,
> -                                             vec_compare, vec_then_clause,
> -                                             vec_else_clause);
> -             vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [13/46] Make STMT_VINFO_RELATED_STMT a stmt_vec_info
  2018-07-24  9:58 ` [13/46] Make STMT_VINFO_RELATED_STMT a stmt_vec_info Richard Sandiford
@ 2018-07-25  9:19   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:19 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:58 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes STMT_VINFO_RELATED_STMT from a gimple stmt to a
> stmt_vec_info.

OK.
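
To spell out what this buys (a sketch rather than code from the patch;
stmt_info stands for any statement that has been replaced by a pattern):

  /* Before: RELATED_STMT was a gimple *, so following the link meant
     another lookup into the stmt_vec_info table.  */
  stmt_vec_info pattern_info
    = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));

  /* After: RELATED_STMT is itself a stmt_vec_info, so the hop is
     direct and the gimple stmt remains reachable via ->stmt.  */
  stmt_vec_info pattern_info = STMT_VINFO_RELATED_STMT (stmt_info);
  gimple *pattern_stmt = pattern_info->stmt;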

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::related_stmt): Change from
>         a gimple stmt to a stmt_vec_info.
>         (is_pattern_stmt_p): Update accordingly.
>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
>         (vect_record_grouped_load_vectors): Likewise.
>         * tree-vect-loop.c (vect_determine_vf_for_stmt): Likewise.
>         (vect_fixup_reduc_chain, vect_update_vf_for_slp): Likewise.
>         (vect_model_reduction_cost): Likewise.
>         (vect_create_epilog_for_reduction): Likewise.
>         (vectorizable_reduction, vectorizable_induction): Likewise.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
>         Return the stmt_vec_info for the pattern statement.
>         (vect_set_pattern_stmt): Update use of STMT_VINFO_RELATED_STMT.
>         (vect_split_statement, vect_mark_pattern_stmts): Likewise.
>         * tree-vect-slp.c (vect_detect_hybrid_slp_stmts): Likewise.
>         (vect_detect_hybrid_slp, vect_get_slp_defs): Likewise.
>         * tree-vect-stmts.c (vect_mark_relevant): Likewise.
>         (vect_get_vec_def_for_operand_1, vectorizable_call): Likewise.
>         (vectorizable_simd_clone_call, vect_analyze_stmt, new_stmt_vec_info)
>         (free_stmt_vec_info, vect_is_simple_use): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:40.725217371 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:44.297185652 +0100
> @@ -847,7 +847,7 @@ struct _stmt_vec_info {
>          related_stmt of the "pattern stmt" points back to this stmt (which is
>          the last stmt in the original sequence of stmts that constitutes the
>          pattern).  */
> -  gimple *related_stmt;
> +  stmt_vec_info related_stmt;
>
>    /* Used to keep a sequence of def stmts of a pattern stmt if such exists.
>       The sequence is attached to the original statement rather than the
> @@ -1189,16 +1189,8 @@ get_later_stmt (gimple *stmt1, gimple *s
>  static inline bool
>  is_pattern_stmt_p (stmt_vec_info stmt_info)
>  {
> -  gimple *related_stmt;
> -  stmt_vec_info related_stmt_info;
> -
> -  related_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> -  if (related_stmt
> -      && (related_stmt_info = vinfo_for_stmt (related_stmt))
> -      && STMT_VINFO_IN_PATTERN_P (related_stmt_info))
> -    return true;
> -
> -  return false;
> +  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> +  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
>  }
>
>  /* Return true if BB is a loop header.  */
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:19.801403171 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:22:44.285185759 +0100
> @@ -213,10 +213,10 @@ vect_preserves_scalar_order_p (gimple *s
>       current position (but could happen earlier).  Reordering is therefore
>       only possible if the first access is a write.  */
>    if (is_pattern_stmt_p (stmtinfo_a))
> -    stmt_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
> +    stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
>    if (is_pattern_stmt_p (stmtinfo_b))
> -    stmt_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
> -  gimple *earlier_stmt = get_earlier_stmt (stmt_a, stmt_b);
> +    stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
> +  gimple *earlier_stmt = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
>    return !DR_IS_WRITE (STMT_VINFO_DATA_REF (vinfo_for_stmt (earlier_stmt)));
>  }
>
> @@ -6359,8 +6359,10 @@ vect_transform_grouped_load (gimple *stm
>  void
>  vect_record_grouped_load_vectors (gimple *stmt, vec<tree> result_chain)
>  {
> -  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
> -  gimple *next_stmt, *new_stmt;
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  vec_info *vinfo = stmt_info->vinfo;
> +  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +  gimple *next_stmt;
>    unsigned int i, gap_count;
>    tree tmp_data_ref;
>
> @@ -6389,29 +6391,28 @@ vect_record_grouped_load_vectors (gimple
>
>        while (next_stmt)
>          {
> -         new_stmt = SSA_NAME_DEF_STMT (tmp_data_ref);
> +         stmt_vec_info new_stmt_info = vinfo->lookup_def (tmp_data_ref);
>           /* We assume that if VEC_STMT is not NULL, this is a case of multiple
>              copies, and we put the new vector statement in the first available
>              RELATED_STMT.  */
>           if (!STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)))
> -           STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt;
> +           STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt_info;
>           else
>              {
>                if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
>                  {
>                   gimple *prev_stmt =
>                     STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
> -                 gimple *rel_stmt =
> -                   STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
> -                 while (rel_stmt)
> +                 stmt_vec_info rel_stmt_info
> +                   = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
> +                 while (rel_stmt_info)
>                     {
> -                     prev_stmt = rel_stmt;
> -                     rel_stmt =
> -                        STMT_VINFO_RELATED_STMT (vinfo_for_stmt (rel_stmt));
> +                     prev_stmt = rel_stmt_info;
> +                     rel_stmt_info = STMT_VINFO_RELATED_STMT (rel_stmt_info);
>                     }
>
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt)) =
> -                    new_stmt;
> +                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt))
> +                   = new_stmt_info;
>                  }
>              }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:40.721217407 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:44.289185723 +0100
> @@ -226,7 +226,7 @@ vect_determine_vf_for_stmt (stmt_vec_inf
>        && STMT_VINFO_RELATED_STMT (stmt_info))
>      {
>        gimple *pattern_def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
> -      stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
> +      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>
>        /* If a pattern statement has def stmts, analyze them too.  */
>        for (gimple_stmt_iterator si = gsi_start (pattern_def_seq);
> @@ -654,23 +654,23 @@ vect_analyze_scalar_cycles (loop_vec_inf
>  static void
>  vect_fixup_reduc_chain (gimple *stmt)
>  {
> -  gimple *firstp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
> -  gimple *stmtp;
> -  gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (firstp))
> -             && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)));
> -  REDUC_GROUP_SIZE (vinfo_for_stmt (firstp))
> -    = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info firstp = STMT_VINFO_RELATED_STMT (stmt_info);
> +  stmt_vec_info stmtp;
> +  gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (firstp)
> +             && REDUC_GROUP_FIRST_ELEMENT (stmt_info));
> +  REDUC_GROUP_SIZE (firstp) = REDUC_GROUP_SIZE (stmt_info);
>    do
>      {
>        stmtp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
> -      REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmtp)) = firstp;
> +      REDUC_GROUP_FIRST_ELEMENT (stmtp) = firstp;
>        stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
>        if (stmt)
> -       REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmtp))
> +       REDUC_GROUP_NEXT_ELEMENT (stmtp)
>           = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
>      }
>    while (stmt);
> -  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmtp)) = vect_reduction_def;
> +  STMT_VINFO_DEF_TYPE (stmtp) = vect_reduction_def;
>  }
>
>  /* Fixup scalar cycles that now have their stmts detected as patterns.  */
> @@ -1436,14 +1436,10 @@ vect_update_vf_for_slp (loop_vec_info lo
>        for (gimple_stmt_iterator si = gsi_start_bb (bb); !gsi_end_p (si);
>            gsi_next (&si))
>         {
> -         gimple *stmt = gsi_stmt (si);
>           stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           if (STMT_VINFO_IN_PATTERN_P (stmt_info)
>               && STMT_VINFO_RELATED_STMT (stmt_info))
> -           {
> -             stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> -             stmt_info = vinfo_for_stmt (stmt);
> -           }
> +           stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>           if ((STMT_VINFO_RELEVANT_P (stmt_info)
>                || VECTORIZABLE_CYCLE_DEF (STMT_VINFO_DEF_TYPE (stmt_info)))
>               && !PURE_SLP_STMT (stmt_info))
> @@ -2247,7 +2243,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>           if (STMT_VINFO_IN_PATTERN_P (stmt_info))
>             {
>               gimple *pattern_def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
> -             stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
> +             stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>               STMT_SLP_TYPE (stmt_info) = loop_vect;
>               for (gimple_stmt_iterator pi = gsi_start (pattern_def_seq);
>                    !gsi_end_p (pi); gsi_next (&pi))
> @@ -3836,7 +3832,6 @@ vect_model_reduction_cost (stmt_vec_info
>    enum tree_code code;
>    optab optab;
>    tree vectype;
> -  gimple *orig_stmt;
>    machine_mode mode;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> @@ -3852,12 +3847,12 @@ vect_model_reduction_cost (stmt_vec_info
>
>    vectype = STMT_VINFO_VECTYPE (stmt_info);
>    mode = TYPE_MODE (vectype);
> -  orig_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> +  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>
> -  if (!orig_stmt)
> -    orig_stmt = STMT_VINFO_STMT (stmt_info);
> +  if (!orig_stmt_info)
> +    orig_stmt_info = stmt_info;
>
> -  code = gimple_assign_rhs_code (orig_stmt);
> +  code = gimple_assign_rhs_code (orig_stmt_info->stmt);
>
>    if (reduction_type == EXTRACT_LAST_REDUCTION
>        || reduction_type == FOLD_LEFT_REDUCTION)
> @@ -3902,7 +3897,7 @@ vect_model_reduction_cost (stmt_vec_info
>       We have a reduction operator that will reduce the vector in one statement.
>       Also requires scalar extract.  */
>
> -  if (!loop || !nested_in_vect_loop_p (loop, orig_stmt))
> +  if (!loop || !nested_in_vect_loop_p (loop, orig_stmt_info))
>      {
>        if (reduc_fn != IFN_LAST)
>         {
> @@ -3953,7 +3948,7 @@ vect_model_reduction_cost (stmt_vec_info
>         {
>           int vec_size_in_bits = tree_to_uhwi (TYPE_SIZE (vectype));
>           tree bitsize =
> -           TYPE_SIZE (TREE_TYPE (gimple_assign_lhs (orig_stmt)));
> +           TYPE_SIZE (TREE_TYPE (gimple_assign_lhs (orig_stmt_info->stmt)));
>           int element_bitsize = tree_to_uhwi (bitsize);
>           int nelements = vec_size_in_bits / element_bitsize;
>
> @@ -4447,7 +4442,7 @@ vect_create_epilog_for_reduction (vec<tr
>    tree orig_name, scalar_result;
>    imm_use_iterator imm_iter, phi_imm_iter;
>    use_operand_p use_p, phi_use_p;
> -  gimple *use_stmt, *orig_stmt, *reduction_phi = NULL;
> +  gimple *use_stmt, *reduction_phi = NULL;
>    bool nested_in_vect_loop = false;
>    auto_vec<gimple *> new_phis;
>    auto_vec<gimple *> inner_phis;
> @@ -4726,7 +4721,7 @@ vect_create_epilog_for_reduction (vec<tr
>            else
>             {
>               def = vect_get_vec_def_for_stmt_copy (dt, def);
> -             STMT_VINFO_RELATED_STMT (prev_phi_info) = phi;
> +             STMT_VINFO_RELATED_STMT (prev_phi_info) = phi_info;
>             }
>
>            SET_PHI_ARG_DEF (phi, single_exit (loop)->dest_idx, def);
> @@ -4758,7 +4753,7 @@ vect_create_epilog_for_reduction (vec<tr
>               SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>                                PHI_RESULT (phi));
>               stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
> -             STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi;
> +             STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi_info;
>               prev_phi_info = outer_phi_info;
>             }
>         }
> @@ -4775,27 +4770,26 @@ vect_create_epilog_for_reduction (vec<tr
>           Otherwise (it is a regular reduction) - the tree-code and scalar-def
>           are taken from STMT.  */
>
> -  orig_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> -  if (!orig_stmt)
> +  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> +  if (!orig_stmt_info)
>      {
>        /* Regular reduction  */
> -      orig_stmt = stmt;
> +      orig_stmt_info = stmt_info;
>      }
>    else
>      {
>        /* Reduction pattern  */
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (orig_stmt);
> -      gcc_assert (STMT_VINFO_IN_PATTERN_P (stmt_vinfo));
> -      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_vinfo) == stmt);
> +      gcc_assert (STMT_VINFO_IN_PATTERN_P (orig_stmt_info));
> +      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) == stmt_info);
>      }
>
> -  code = gimple_assign_rhs_code (orig_stmt);
> +  code = gimple_assign_rhs_code (orig_stmt_info->stmt);
>    /* For MINUS_EXPR the initial vector is [init_val,0,...,0], therefore,
>       partial results are added and not subtracted.  */
>    if (code == MINUS_EXPR)
>      code = PLUS_EXPR;
>
> -  scalar_dest = gimple_assign_lhs (orig_stmt);
> +  scalar_dest = gimple_assign_lhs (orig_stmt_info->stmt);
>    scalar_type = TREE_TYPE (scalar_dest);
>    scalar_results.create (group_size);
>    new_scalar_dest = vect_create_destination_var (scalar_dest, NULL);
> @@ -5613,10 +5607,11 @@ vect_create_epilog_for_reduction (vec<tr
>          {
>           gimple *current_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[k];
>
> -          orig_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
> -          /* SLP statements can't participate in patterns.  */
> -          gcc_assert (!orig_stmt);
> -          scalar_dest = gimple_assign_lhs (current_stmt);
> +         orig_stmt_info
> +           = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
> +         /* SLP statements can't participate in patterns.  */
> +         gcc_assert (!orig_stmt_info);
> +         scalar_dest = gimple_assign_lhs (current_stmt);
>          }
>
>        phis.create (3);
> @@ -6097,8 +6092,6 @@ vectorizable_reduction (gimple *stmt, gi
>    enum tree_code cond_reduc_op_code = ERROR_MARK;
>    tree scalar_type;
>    bool is_simple_use;
> -  gimple *orig_stmt;
> -  stmt_vec_info orig_stmt_info = NULL;
>    int i;
>    int ncopies;
>    int epilog_copies;
> @@ -6229,7 +6222,7 @@ vectorizable_reduction (gimple *stmt, gi
>                       if (j == 0)
>                         STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
>                       else
> -                       STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi;
> +                       STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi_info;
>                       prev_phi_info = new_phi_info;
>                     }
>                 }
> @@ -6259,10 +6252,9 @@ vectorizable_reduction (gimple *stmt, gi
>       the STMT_VINFO_RELATED_STMT field records the last stmt in
>       the original sequence that constitutes the pattern.  */
>
> -  orig_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
> -  if (orig_stmt)
> +  stmt_vec_info orig_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> +  if (orig_stmt_info)
>      {
> -      orig_stmt_info = vinfo_for_stmt (orig_stmt);
>        gcc_assert (STMT_VINFO_IN_PATTERN_P (orig_stmt_info));
>        gcc_assert (!STMT_VINFO_IN_PATTERN_P (stmt_info));
>      }
> @@ -6393,7 +6385,7 @@ vectorizable_reduction (gimple *stmt, gi
>           return false;
>         }
>
> -      if (orig_stmt)
> +      if (orig_stmt_info)
>         reduc_def_stmt = STMT_VINFO_REDUC_DEF (orig_stmt_info);
>        else
>         reduc_def_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
> @@ -6414,7 +6406,7 @@ vectorizable_reduction (gimple *stmt, gi
>        /* For pattern recognized stmts, orig_stmt might be a reduction,
>          but some helper statements for the pattern might not, or
>          might be COND_EXPRs with reduction uses in the condition.  */
> -      gcc_assert (orig_stmt);
> +      gcc_assert (orig_stmt_info);
>        return false;
>      }
>
> @@ -6548,10 +6540,10 @@ vectorizable_reduction (gimple *stmt, gi
>         }
>      }
>
> -  if (orig_stmt)
> -    gcc_assert (tmp == orig_stmt
> +  if (orig_stmt_info)
> +    gcc_assert (tmp == orig_stmt_info
>                 || (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp))
> -                   == orig_stmt));
> +                   == orig_stmt_info));
>    else
>      /* We changed STMT to be the first stmt in reduction chain, hence we
>         check that in this case the first element in the chain is STMT.  */
> @@ -6673,13 +6665,13 @@ vectorizable_reduction (gimple *stmt, gi
>
>    vect_reduction_type reduction_type
>      = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info);
> -  if (orig_stmt
> +  if (orig_stmt_info
>        && (reduction_type == TREE_CODE_REDUCTION
>           || reduction_type == FOLD_LEFT_REDUCTION))
>      {
>        /* This is a reduction pattern: get the vectype from the type of the
>           reduction variable, and get the tree-code from orig_stmt.  */
> -      orig_code = gimple_assign_rhs_code (orig_stmt);
> +      orig_code = gimple_assign_rhs_code (orig_stmt_info->stmt);
>        gcc_assert (vectype_out);
>        vec_mode = TYPE_MODE (vectype_out);
>      }
> @@ -7757,7 +7749,7 @@ vectorizable_induction (gimple *phi,
>
>           gsi_insert_before (&si, new_stmt, GSI_SAME_STMT);
>           new_stmt_info = loop_vinfo->add_stmt (new_stmt);
> -         STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt;
> +         STMT_VINFO_RELATED_STMT (prev_stmt_vinfo) = new_stmt_info;
>           prev_stmt_vinfo = new_stmt_info;
>         }
>      }
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:37.253248202 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:44.289185723 +0100
> @@ -94,10 +94,11 @@ vect_pattern_detected (const char *name,
>      }
>  }
>
> -/* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO.
> -   Set its vector type to VECTYPE if it doesn't have one already.  */
> +/* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO and
> +   return the pattern statement's stmt_vec_info.  Set its vector type to
> +   VECTYPE if it doesn't have one already.  */
>
> -static void
> +static stmt_vec_info
>  vect_init_pattern_stmt (gimple *pattern_stmt, stmt_vec_info orig_stmt_info,
>                         tree vectype)
>  {
> @@ -107,11 +108,12 @@ vect_init_pattern_stmt (gimple *pattern_
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
> -  STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info->stmt;
> +  STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
>    STMT_VINFO_DEF_TYPE (pattern_stmt_info)
>      = STMT_VINFO_DEF_TYPE (orig_stmt_info);
>    if (!STMT_VINFO_VECTYPE (pattern_stmt_info))
>      STMT_VINFO_VECTYPE (pattern_stmt_info) = vectype;
> +  return pattern_stmt_info;
>  }
>
>  /* Set the pattern statement of ORIG_STMT_INFO to PATTERN_STMT.
> @@ -123,8 +125,8 @@ vect_set_pattern_stmt (gimple *pattern_s
>                        tree vectype)
>  {
>    STMT_VINFO_IN_PATTERN_P (orig_stmt_info) = true;
> -  STMT_VINFO_RELATED_STMT (orig_stmt_info) = pattern_stmt;
> -  vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, vectype);
> +  STMT_VINFO_RELATED_STMT (orig_stmt_info)
> +    = vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, vectype);
>  }
>
>  /* Add NEW_STMT to STMT_INFO's pattern definition statements.  If VECTYPE
> @@ -634,8 +636,7 @@ vect_split_statement (stmt_vec_info stmt
>      {
>        /* STMT2_INFO is part of a pattern.  Get the statement to which
>          the pattern is attached.  */
> -      stmt_vec_info orig_stmt2_info
> -       = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt2_info));
> +      stmt_vec_info orig_stmt2_info = STMT_VINFO_RELATED_STMT (stmt2_info);
>        vect_init_pattern_stmt (stmt1, orig_stmt2_info, vectype);
>
>        if (dump_enabled_p ())
> @@ -659,7 +660,7 @@ vect_split_statement (stmt_vec_info stmt
>         }
>
>        gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt2_info);
> -      if (STMT_VINFO_RELATED_STMT (orig_stmt2_info) == stmt2_info->stmt)
> +      if (STMT_VINFO_RELATED_STMT (orig_stmt2_info) == stmt2_info)
>         /* STMT2_INFO is the actual pattern statement.  Add STMT1
>            to the end of the definition sequence.  */
>         gimple_seq_add_stmt_without_update (def_seq, stmt1);
> @@ -4754,8 +4755,7 @@ vect_mark_pattern_stmts (gimple *orig_st
>         }
>
>        /* Switch to the statement that ORIG replaces.  */
> -      orig_stmt_info
> -       = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (orig_stmt_info));
> +      orig_stmt_info = STMT_VINFO_RELATED_STMT (orig_stmt_info);
>
>        /* We shouldn't be replacing the main pattern statement.  */
>        gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) != orig_stmt);
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:37.253248202 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:44.293185688 +0100
> @@ -2327,7 +2327,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>           original stmt for immediate uses.  */
>        if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
>           && STMT_VINFO_RELATED_STMT (stmt_vinfo))
> -       stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> +       stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
>        tree def;
>        if (gimple_code (stmt) == GIMPLE_PHI)
>         def = gimple_phi_result (stmt);
> @@ -2341,7 +2341,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>               continue;
>             if (STMT_VINFO_IN_PATTERN_P (use_vinfo)
>                 && STMT_VINFO_RELATED_STMT (use_vinfo))
> -             use_vinfo = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (use_vinfo));
> +             use_vinfo = STMT_VINFO_RELATED_STMT (use_vinfo);
>             if (!STMT_SLP_TYPE (use_vinfo)
>                 && (STMT_VINFO_RELEVANT (use_vinfo)
>                     || VECTORIZABLE_CYCLE_DEF (STMT_VINFO_DEF_TYPE (use_vinfo)))
> @@ -2446,7 +2446,7 @@ vect_detect_hybrid_slp (loop_vec_info lo
>               memset (&wi, 0, sizeof (wi));
>               wi.info = loop_vinfo;
>               gimple_stmt_iterator gsi2
> -               = gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
> +               = gsi_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
>               walk_gimple_stmt (&gsi2, vect_detect_hybrid_slp_2,
>                                 vect_detect_hybrid_slp_1, &wi);
>               walk_gimple_seq (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info),
> @@ -3612,7 +3612,7 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>           if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
>             {
>               gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
> -             gimple *related
> +             stmt_vec_info related
>                 = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
>               tree first_def_op;
>
> @@ -3622,7 +3622,8 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>                 first_def_op = gimple_get_lhs (first_def);
>               if (operand_equal_p (oprnd, first_def_op, 0)
>                   || (related
> -                     && operand_equal_p (oprnd, gimple_get_lhs (related), 0)))
> +                     && operand_equal_p (oprnd,
> +                                         gimple_get_lhs (related->stmt), 0)))
>                 {
>                   /* The number of vector defs is determined by the number of
>                      vector statements in the node from which we get those
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:40.725217371 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:44.293185688 +0100
> @@ -202,7 +202,6 @@ vect_mark_relevant (vec<gimple *> *workl
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    enum vect_relevant save_relevant = STMT_VINFO_RELEVANT (stmt_info);
>    bool save_live_p = STMT_VINFO_LIVE_P (stmt_info);
> -  gimple *pattern_stmt;
>
>    if (dump_enabled_p ())
>      {
> @@ -222,17 +221,16 @@ vect_mark_relevant (vec<gimple *> *workl
>          as relevant/live because it's not going to be vectorized.
>          Instead mark the pattern-stmt that replaces it.  */
>
> -      pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> -
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_NOTE, vect_location,
>                          "last stmt in pattern. don't mark"
>                          " relevant/live.\n");
> -      stmt_info = vinfo_for_stmt (pattern_stmt);
> -      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == stmt);
> +      stmt_vec_info old_stmt_info = stmt_info;
> +      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> +      gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == old_stmt_info);
>        save_relevant = STMT_VINFO_RELEVANT (stmt_info);
>        save_live_p = STMT_VINFO_LIVE_P (stmt_info);
> -      stmt = pattern_stmt;
> +      stmt = stmt_info->stmt;
>      }
>
>    STMT_VINFO_LIVE_P (stmt_info) |= live_p;
> @@ -1489,8 +1487,8 @@ vect_get_vec_def_for_operand_1 (gimple *
>          if (!vec_stmt
>              && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
>              && !STMT_VINFO_RELEVANT (def_stmt_info))
> -          vec_stmt = STMT_VINFO_VEC_STMT (vinfo_for_stmt (
> -                       STMT_VINFO_RELATED_STMT (def_stmt_info)));
> +         vec_stmt = (STMT_VINFO_VEC_STMT
> +                     (STMT_VINFO_RELATED_STMT (def_stmt_info)));
>          gcc_assert (vec_stmt);
>         if (gimple_code (vec_stmt) == GIMPLE_PHI)
>           vec_oprnd = PHI_RESULT (vec_stmt);
> @@ -3635,7 +3633,7 @@ vectorizable_call (gimple *gs, gimple_st
>      return true;
>
>    if (is_pattern_stmt_p (stmt_info))
> -    stmt_info = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
> +    stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>    lhs = gimple_get_lhs (stmt_info->stmt);
>
>    gassign *new_stmt
> @@ -4370,7 +4368,7 @@ vectorizable_simd_clone_call (gimple *st
>      {
>        type = TREE_TYPE (scalar_dest);
>        if (is_pattern_stmt_p (stmt_info))
> -       lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info));
> +       lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
>        else
>         lhs = gimple_call_lhs (stmt);
>        new_stmt = gimple_build_assign (lhs, build_zero_cst (type));
> @@ -9420,7 +9418,6 @@ vect_analyze_stmt (gimple *stmt, bool *n
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    enum vect_relevant relevance = STMT_VINFO_RELEVANT (stmt_info);
>    bool ok;
> -  gimple *pattern_stmt;
>    gimple_seq pattern_def_seq;
>
>    if (dump_enabled_p ())
> @@ -9482,18 +9479,18 @@ vect_analyze_stmt (gimple *stmt, bool *n
>       traversal, don't analyze pattern stmts instead, the pattern stmts
>       already will be part of SLP instance.  */
>
> -  pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> +  stmt_vec_info pattern_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>    if (!STMT_VINFO_RELEVANT_P (stmt_info)
>        && !STMT_VINFO_LIVE_P (stmt_info))
>      {
>        if (STMT_VINFO_IN_PATTERN_P (stmt_info)
> -          && pattern_stmt
> -          && (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_stmt))
> -              || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_stmt))))
> +         && pattern_stmt_info
> +         && (STMT_VINFO_RELEVANT_P (pattern_stmt_info)
> +             || STMT_VINFO_LIVE_P (pattern_stmt_info)))
>          {
>            /* Analyze PATTERN_STMT instead of the original stmt.  */
> -          stmt = pattern_stmt;
> -          stmt_info = vinfo_for_stmt (pattern_stmt);
> +         stmt = pattern_stmt_info->stmt;
> +         stmt_info = pattern_stmt_info;
>            if (dump_enabled_p ())
>              {
>                dump_printf_loc (MSG_NOTE, vect_location,
> @@ -9511,9 +9508,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
>      }
>    else if (STMT_VINFO_IN_PATTERN_P (stmt_info)
>            && node == NULL
> -           && pattern_stmt
> -           && (STMT_VINFO_RELEVANT_P (vinfo_for_stmt (pattern_stmt))
> -               || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_stmt))))
> +          && pattern_stmt_info
> +          && (STMT_VINFO_RELEVANT_P (pattern_stmt_info)
> +              || STMT_VINFO_LIVE_P (pattern_stmt_info)))
>      {
>        /* Analyze PATTERN_STMT too.  */
>        if (dump_enabled_p ())
> @@ -9523,7 +9520,7 @@ vect_analyze_stmt (gimple *stmt, bool *n
>            dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
>          }
>
> -      if (!vect_analyze_stmt (pattern_stmt, need_to_vectorize, node,
> +      if (!vect_analyze_stmt (pattern_stmt_info, need_to_vectorize, node,
>                               node_instance, cost_vec))
>          return false;
>     }
> @@ -9855,7 +9852,6 @@ new_stmt_vec_info (gimple *stmt, vec_inf
>    STMT_VINFO_VEC_STMT (res) = NULL;
>    STMT_VINFO_VECTORIZABLE (res) = true;
>    STMT_VINFO_IN_PATTERN_P (res) = false;
> -  STMT_VINFO_RELATED_STMT (res) = NULL;
>    STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
>    STMT_VINFO_DATA_REF (res) = NULL;
>    STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
> @@ -9936,16 +9932,14 @@ free_stmt_vec_info (gimple *stmt)
>               release_ssa_name (lhs);
>             free_stmt_vec_info (seq_stmt);
>           }
> -      stmt_vec_info patt_info
> -       = vinfo_for_stmt (STMT_VINFO_RELATED_STMT (stmt_info));
> -      if (patt_info)
> -       {
> -         gimple *patt_stmt = STMT_VINFO_STMT (patt_info);
> -         gimple_set_bb (patt_stmt, NULL);
> -         tree lhs = gimple_get_lhs (patt_stmt);
> +      stmt_vec_info patt_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> +      if (patt_stmt_info)
> +       {
> +         gimple_set_bb (patt_stmt_info->stmt, NULL);
> +         tree lhs = gimple_get_lhs (patt_stmt_info->stmt);
>           if (lhs && TREE_CODE (lhs) == SSA_NAME)
>             release_ssa_name (lhs);
> -         free_stmt_vec_info (patt_stmt);
> +         free_stmt_vec_info (patt_stmt_info);
>         }
>      }
>
> @@ -10143,8 +10137,8 @@ vect_is_simple_use (tree operand, vec_in
>         {
>           if (STMT_VINFO_IN_PATTERN_P (stmt_vinfo))
>             {
> -             def_stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> -             stmt_vinfo = vinfo_for_stmt (def_stmt);
> +             stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> +             def_stmt = stmt_vinfo->stmt;
>             }
>           switch (gimple_code (def_stmt))
>             {

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info
  2018-07-24  9:58 ` [14/46] Make STMT_VINFO_VEC_STMT " Richard Sandiford
@ 2018-07-25  9:21   ` Richard Biener
  2018-07-25 11:03     ` Richard Sandiford
  2018-08-02  0:22   ` H.J. Lu
  1 sibling, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:21 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:58 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes STMT_VINFO_VEC_STMT from a gimple stmt to a
> stmt_vec_info and makes the vectorizable_* routines pass back
> a stmt_vec_info to vect_transform_stmt.

OK, but: I don't think we ever "use" that stmt_info on vectorized stmts
apart from the chaining via related-stmt, do we?  I'd also like to get rid
of that chaining and instead do something similar to SLP, where we simply
have a vec<> of vectorized stmts.
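
Roughly this, as a sketch (STMT_VINFO_VEC_STMTS below is an invented
name, by analogy with SLP_TREE_VEC_STMTS; it is not part of this patch):

  /* Today each copy j is chained to copy j-1 via RELATED_STMT:  */
  if (j == 0)
    STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
  else
    STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
  prev_stmt_info = new_stmt_info;

  /* With a flat vector each copy would simply be appended:  */
  STMT_VINFO_VEC_STMTS (stmt_info).safe_push (new_stmt_info);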

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Change from
>         a gimple stmt to a stmt_vec_info.
>         (vectorizable_condition, vectorizable_live_operation)
>         (vectorizable_reduction, vectorizable_induction): Pass back the
>         vectorized statement as a stmt_vec_info.
>         * tree-vect-data-refs.c (vect_record_grouped_load_vectors): Update
>         use of STMT_VINFO_VEC_STMT.
>         * tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise,
>         accumulating the inner phis that feed the STMT_VINFO_VEC_STMT
>         as stmt_vec_infos rather than gimple stmts.
>         (vectorize_fold_left_reduction): Change vec_stmt from a gimple stmt
>         to a stmt_vec_info.
>         (vectorizable_live_operation): Likewise.
>         (vectorizable_reduction, vectorizable_induction): Likewise,
>         updating use of STMT_VINFO_VEC_STMT.
>         * tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Update use
>         of STMT_VINFO_VEC_STMT.
>         (vect_build_gather_load_calls, vectorizable_bswap, vectorizable_call)
>         (vectorizable_simd_clone_call, vectorizable_conversion)
>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>         (vectorizable_comparison, can_vectorize_live_stmts): Change vec_stmt
>         from a gimple stmt to a stmt_vec_info.
>         (vect_transform_stmt): Update use of STMT_VINFO_VEC_STMT.  Pass a
>         pointer to a stmt_vec_info to the vectorizable_* routines.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:44.297185652 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:47.489157307 +0100
> @@ -812,7 +812,7 @@ struct _stmt_vec_info {
>    tree vectype;
>
>    /* The vectorized version of the stmt.  */
> -  gimple *vectorized_stmt;
> +  stmt_vec_info vectorized_stmt;
>
>
>    /* The following is relevant only for stmts that contain a non-scalar
> @@ -1560,7 +1560,7 @@ extern void vect_remove_stores (gimple *
>  extern bool vect_analyze_stmt (gimple *, bool *, slp_tree, slp_instance,
>                                stmt_vector_for_cost *);
>  extern bool vectorizable_condition (gimple *, gimple_stmt_iterator *,
> -                                   gimple **, tree, int, slp_tree,
> +                                   stmt_vec_info *, tree, int, slp_tree,
>                                     stmt_vector_for_cost *);
>  extern void vect_get_load_cost (stmt_vec_info, int, bool,
>                                 unsigned int *, unsigned int *,
> @@ -1649,13 +1649,13 @@ extern tree vect_get_loop_mask (gimple_s
>  extern struct loop *vect_transform_loop (loop_vec_info);
>  extern loop_vec_info vect_analyze_loop_form (struct loop *, vec_info_shared *);
>  extern bool vectorizable_live_operation (gimple *, gimple_stmt_iterator *,
> -                                        slp_tree, int, gimple **,
> +                                        slp_tree, int, stmt_vec_info *,
>                                          stmt_vector_for_cost *);
>  extern bool vectorizable_reduction (gimple *, gimple_stmt_iterator *,
> -                                   gimple **, slp_tree, slp_instance,
> +                                   stmt_vec_info *, slp_tree, slp_instance,
>                                     stmt_vector_for_cost *);
>  extern bool vectorizable_induction (gimple *, gimple_stmt_iterator *,
> -                                   gimple **, slp_tree,
> +                                   stmt_vec_info *, slp_tree,
>                                     stmt_vector_for_cost *);
>  extern tree get_initial_def_for_reduction (gimple *, tree, tree *);
>  extern bool vect_worthwhile_without_simd_p (vec_info *, tree_code);
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:44.285185759 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
> @@ -6401,18 +6401,17 @@ vect_record_grouped_load_vectors (gimple
>              {
>                if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
>                  {
> -                 gimple *prev_stmt =
> -                   STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
> +                 stmt_vec_info prev_stmt_info
> +                   = STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
>                   stmt_vec_info rel_stmt_info
> -                   = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
> +                   = STMT_VINFO_RELATED_STMT (prev_stmt_info);
>                   while (rel_stmt_info)
>                     {
> -                     prev_stmt = rel_stmt_info;
> +                     prev_stmt_info = rel_stmt_info;
>                       rel_stmt_info = STMT_VINFO_RELATED_STMT (rel_stmt_info);
>                     }
>
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt))
> -                   = new_stmt_info;
> +                 STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>                  }
>              }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:44.289185723 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:47.489157307 +0100
> @@ -4445,7 +4445,7 @@ vect_create_epilog_for_reduction (vec<tr
>    gimple *use_stmt, *reduction_phi = NULL;
>    bool nested_in_vect_loop = false;
>    auto_vec<gimple *> new_phis;
> -  auto_vec<gimple *> inner_phis;
> +  auto_vec<stmt_vec_info> inner_phis;
>    enum vect_def_type dt = vect_unknown_def_type;
>    int j, i;
>    auto_vec<tree> scalar_results;
> @@ -4455,7 +4455,7 @@ vect_create_epilog_for_reduction (vec<tr
>    bool slp_reduc = false;
>    bool direct_slp_reduc;
>    tree new_phi_result;
> -  gimple *inner_phi = NULL;
> +  stmt_vec_info inner_phi = NULL;
>    tree induction_index = NULL_TREE;
>
>    if (slp_node)
> @@ -4605,7 +4605,7 @@ vect_create_epilog_for_reduction (vec<tr
>        tree indx_before_incr, indx_after_incr;
>        poly_uint64 nunits_out = TYPE_VECTOR_SUBPARTS (vectype);
>
> -      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
> +      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info)->stmt;
>        gcc_assert (gimple_assign_rhs_code (vec_stmt) == VEC_COND_EXPR);
>
>        int scalar_precision
> @@ -4738,20 +4738,21 @@ vect_create_epilog_for_reduction (vec<tr
>        inner_phis.create (vect_defs.length ());
>        FOR_EACH_VEC_ELT (new_phis, i, phi)
>         {
> +         stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
>           tree new_result = copy_ssa_name (PHI_RESULT (phi));
>           gphi *outer_phi = create_phi_node (new_result, exit_bb);
>           SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>                            PHI_RESULT (phi));
>           prev_phi_info = loop_vinfo->add_stmt (outer_phi);
> -         inner_phis.quick_push (phi);
> +         inner_phis.quick_push (phi_info);
>           new_phis[i] = outer_phi;
> -          while (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi)))
> +         while (STMT_VINFO_RELATED_STMT (phi_info))
>              {
> -             phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
> -             new_result = copy_ssa_name (PHI_RESULT (phi));
> +             phi_info = STMT_VINFO_RELATED_STMT (phi_info);
> +             new_result = copy_ssa_name (PHI_RESULT (phi_info->stmt));
>               outer_phi = create_phi_node (new_result, exit_bb);
>               SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
> -                              PHI_RESULT (phi));
> +                              PHI_RESULT (phi_info->stmt));
>               stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
>               STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi_info;
>               prev_phi_info = outer_phi_info;
> @@ -5644,7 +5645,8 @@ vect_create_epilog_for_reduction (vec<tr
>               if (double_reduc)
>                 STMT_VINFO_VEC_STMT (exit_phi_vinfo) = inner_phi;
>               else
> -               STMT_VINFO_VEC_STMT (exit_phi_vinfo) = epilog_stmt;
> +               STMT_VINFO_VEC_STMT (exit_phi_vinfo)
> +                 = vinfo_for_stmt (epilog_stmt);
>                if (!double_reduc
>                    || STMT_VINFO_DEF_TYPE (exit_phi_vinfo)
>                        != vect_double_reduction_def)
> @@ -5706,8 +5708,8 @@ vect_create_epilog_for_reduction (vec<tr
>                    add_phi_arg (vect_phi, vect_phi_init,
>                                 loop_preheader_edge (outer_loop),
>                                 UNKNOWN_LOCATION);
> -                  add_phi_arg (vect_phi, PHI_RESULT (inner_phi),
> -                               loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
> +                 add_phi_arg (vect_phi, PHI_RESULT (inner_phi->stmt),
> +                              loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
>                    if (dump_enabled_p ())
>                      {
>                        dump_printf_loc (MSG_NOTE, vect_location,
> @@ -5846,7 +5848,7 @@ vect_expand_fold_left (gimple_stmt_itera
>
>  static bool
>  vectorize_fold_left_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
> -                              gimple **vec_stmt, slp_tree slp_node,
> +                              stmt_vec_info *vec_stmt, slp_tree slp_node,
>                                gimple *reduc_def_stmt,
>                                tree_code code, internal_fn reduc_fn,
>                                tree ops[3], tree vectype_in,
> @@ -6070,7 +6072,7 @@ is_nonwrapping_integer_induction (gimple
>
>  bool
>  vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
> -                       gimple **vec_stmt, slp_tree slp_node,
> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>                         slp_instance slp_node_instance,
>                         stmt_vector_for_cost *cost_vec)
>  {
> @@ -6220,7 +6222,8 @@ vectorizable_reduction (gimple *stmt, gi
>                   else
>                     {
>                       if (j == 0)
> -                       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
> +                       STMT_VINFO_VEC_STMT (stmt_info)
> +                         = *vec_stmt = new_phi_info;
>                       else
>                         STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi_info;
>                       prev_phi_info = new_phi_info;
> @@ -7201,7 +7204,7 @@ vectorizable_reduction (gimple *stmt, gi
>    /* Finalize the reduction-phi (set its arguments) and create the
>       epilog reduction code.  */
>    if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
> -    vect_defs[0] = gimple_get_lhs (*vec_stmt);
> +    vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
>
>    vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_stmt,
>                                     epilog_copies, reduc_fn, phis,
> @@ -7262,7 +7265,7 @@ vect_worthwhile_without_simd_p (vec_info
>  bool
>  vectorizable_induction (gimple *phi,
>                         gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
> -                       gimple **vec_stmt, slp_tree slp_node,
> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>                         stmt_vector_for_cost *cost_vec)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (phi);
> @@ -7700,7 +7703,7 @@ vectorizable_induction (gimple *phi,
>    add_phi_arg (induction_phi, vec_def, loop_latch_edge (iv_loop),
>                UNKNOWN_LOCATION);
>
> -  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi;
> +  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi_info;
>
>    /* In case that vectorization factor (VF) is bigger than the number
>       of elements that we can fit in a vectype (nunits), we have to generate
> @@ -7779,7 +7782,7 @@ vectorizable_induction (gimple *phi,
>           gcc_assert (STMT_VINFO_RELEVANT_P (stmt_vinfo)
>                       && !STMT_VINFO_LIVE_P (stmt_vinfo));
>
> -         STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt;
> +         STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt_info;
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_NOTE, vect_location,
> @@ -7811,7 +7814,7 @@ vectorizable_induction (gimple *phi,
>  vectorizable_live_operation (gimple *stmt,
>                              gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
>                              slp_tree slp_node, int slp_index,
> -                            gimple **vec_stmt,
> +                            stmt_vec_info *vec_stmt,
>                              stmt_vector_for_cost *)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:44.293185688 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:47.489157307 +0100
> @@ -1465,7 +1465,7 @@ vect_init_vector (gimple *stmt, tree val
>  vect_get_vec_def_for_operand_1 (gimple *def_stmt, enum vect_def_type dt)
>  {
>    tree vec_oprnd;
> -  gimple *vec_stmt;
> +  stmt_vec_info vec_stmt_info;
>    stmt_vec_info def_stmt_info = NULL;
>
>    switch (dt)
> @@ -1482,21 +1482,19 @@ vect_get_vec_def_for_operand_1 (gimple *
>          /* Get the def from the vectorized stmt.  */
>          def_stmt_info = vinfo_for_stmt (def_stmt);
>
> -        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
> -        /* Get vectorized pattern statement.  */
> -        if (!vec_stmt
> -            && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
> -            && !STMT_VINFO_RELEVANT (def_stmt_info))
> -         vec_stmt = (STMT_VINFO_VEC_STMT
> -                     (STMT_VINFO_RELATED_STMT (def_stmt_info)));
> -        gcc_assert (vec_stmt);
> -       if (gimple_code (vec_stmt) == GIMPLE_PHI)
> -         vec_oprnd = PHI_RESULT (vec_stmt);
> -       else if (is_gimple_call (vec_stmt))
> -         vec_oprnd = gimple_call_lhs (vec_stmt);
> +       vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
> +       /* Get vectorized pattern statement.  */
> +       if (!vec_stmt_info
> +           && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
> +           && !STMT_VINFO_RELEVANT (def_stmt_info))
> +         vec_stmt_info = (STMT_VINFO_VEC_STMT
> +                          (STMT_VINFO_RELATED_STMT (def_stmt_info)));
> +       gcc_assert (vec_stmt_info);
> +       if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
> +         vec_oprnd = PHI_RESULT (phi);
>         else
> -         vec_oprnd = gimple_assign_lhs (vec_stmt);
> -        return vec_oprnd;
> +         vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
> +       return vec_oprnd;
>        }
>
>      /* operand is defined by a loop header phi.  */
> @@ -1507,14 +1505,14 @@ vect_get_vec_def_for_operand_1 (gimple *
>        {
>         gcc_assert (gimple_code (def_stmt) == GIMPLE_PHI);
>
> -        /* Get the def from the vectorized stmt.  */
> -        def_stmt_info = vinfo_for_stmt (def_stmt);
> -        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
> -       if (gimple_code (vec_stmt) == GIMPLE_PHI)
> -         vec_oprnd = PHI_RESULT (vec_stmt);
> +       /* Get the def from the vectorized stmt.  */
> +       def_stmt_info = vinfo_for_stmt (def_stmt);
> +       vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
> +       if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
> +         vec_oprnd = PHI_RESULT (phi);
>         else
> -         vec_oprnd = gimple_get_lhs (vec_stmt);
> -        return vec_oprnd;
> +         vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
> +       return vec_oprnd;
>        }
>
>      default:
> @@ -2674,8 +2672,9 @@ vect_build_zero_merge_argument (gimple *
>
>  static void
>  vect_build_gather_load_calls (gimple *stmt, gimple_stmt_iterator *gsi,
> -                             gimple **vec_stmt, gather_scatter_info *gs_info,
> -                             tree mask, vect_def_type mask_dt)
> +                             stmt_vec_info *vec_stmt,
> +                             gather_scatter_info *gs_info, tree mask,
> +                             vect_def_type mask_dt)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> @@ -2960,7 +2959,7 @@ vect_get_data_ptr_increment (data_refere
>
>  static bool
>  vectorizable_bswap (gimple *stmt, gimple_stmt_iterator *gsi,
> -                   gimple **vec_stmt, slp_tree slp_node,
> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
>                     tree vectype_in, enum vect_def_type *dt,
>                     stmt_vector_for_cost *cost_vec)
>  {
> @@ -3104,8 +3103,9 @@ simple_integer_narrowing (tree vectype_o
>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>
>  static bool
> -vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi, gimple **vec_stmt,
> -                  slp_tree slp_node, stmt_vector_for_cost *cost_vec)
> +vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi,
> +                  stmt_vec_info *vec_stmt, slp_tree slp_node,
> +                  stmt_vector_for_cost *cost_vec)
>  {
>    gcall *stmt;
>    tree vec_dest;
> @@ -3745,7 +3745,7 @@ simd_clone_subparts (tree vectype)
>
>  static bool
>  vectorizable_simd_clone_call (gimple *stmt, gimple_stmt_iterator *gsi,
> -                             gimple **vec_stmt, slp_tree slp_node,
> +                             stmt_vec_info *vec_stmt, slp_tree slp_node,
>                               stmt_vector_for_cost *)
>  {
>    tree vec_dest;
> @@ -4596,7 +4596,7 @@ vect_create_vectorized_promotion_stmts (
>
>  static bool
>  vectorizable_conversion (gimple *stmt, gimple_stmt_iterator *gsi,
> -                        gimple **vec_stmt, slp_tree slp_node,
> +                        stmt_vec_info *vec_stmt, slp_tree slp_node,
>                          stmt_vector_for_cost *cost_vec)
>  {
>    tree vec_dest;
> @@ -5204,7 +5204,7 @@ vectorizable_conversion (gimple *stmt, g
>
>  static bool
>  vectorizable_assignment (gimple *stmt, gimple_stmt_iterator *gsi,
> -                        gimple **vec_stmt, slp_tree slp_node,
> +                        stmt_vec_info *vec_stmt, slp_tree slp_node,
>                          stmt_vector_for_cost *cost_vec)
>  {
>    tree vec_dest;
> @@ -5405,7 +5405,7 @@ vect_supportable_shift (enum tree_code c
>
>  static bool
>  vectorizable_shift (gimple *stmt, gimple_stmt_iterator *gsi,
> -                    gimple **vec_stmt, slp_tree slp_node,
> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
>                     stmt_vector_for_cost *cost_vec)
>  {
>    tree vec_dest;
> @@ -5769,7 +5769,7 @@ vectorizable_shift (gimple *stmt, gimple
>
>  static bool
>  vectorizable_operation (gimple *stmt, gimple_stmt_iterator *gsi,
> -                       gimple **vec_stmt, slp_tree slp_node,
> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>                         stmt_vector_for_cost *cost_vec)
>  {
>    tree vec_dest;
> @@ -6222,8 +6222,9 @@ get_group_alias_ptr_type (gimple *first_
>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>
>  static bool
> -vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
> -                    slp_tree slp_node, stmt_vector_for_cost *cost_vec)
> +vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi,
> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
> +                   stmt_vector_for_cost *cost_vec)
>  {
>    tree data_ref;
>    tree op;
> @@ -7385,8 +7386,9 @@ hoist_defs_of_uses (gimple *stmt, struct
>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>
>  static bool
> -vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
> -                   slp_tree slp_node, slp_instance slp_node_instance,
> +vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi,
> +                  stmt_vec_info *vec_stmt, slp_tree slp_node,
> +                  slp_instance slp_node_instance,
>                    stmt_vector_for_cost *cost_vec)
>  {
>    tree scalar_dest;
> @@ -8710,8 +8712,9 @@ vect_is_simple_cond (tree cond, vec_info
>
>  bool
>  vectorizable_condition (gimple *stmt, gimple_stmt_iterator *gsi,
> -                       gimple **vec_stmt, tree reduc_def, int reduc_index,
> -                       slp_tree slp_node, stmt_vector_for_cost *cost_vec)
> +                       stmt_vec_info *vec_stmt, tree reduc_def,
> +                       int reduc_index, slp_tree slp_node,
> +                       stmt_vector_for_cost *cost_vec)
>  {
>    tree scalar_dest = NULL_TREE;
>    tree vec_dest = NULL_TREE;
> @@ -9111,7 +9114,7 @@ vectorizable_condition (gimple *stmt, gi
>
>  static bool
>  vectorizable_comparison (gimple *stmt, gimple_stmt_iterator *gsi,
> -                        gimple **vec_stmt, tree reduc_def,
> +                        stmt_vec_info *vec_stmt, tree reduc_def,
>                          slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>  {
>    tree lhs, rhs1, rhs2;
> @@ -9383,7 +9386,7 @@ vectorizable_comparison (gimple *stmt, g
>
>  static bool
>  can_vectorize_live_stmts (gimple *stmt, gimple_stmt_iterator *gsi,
> -                         slp_tree slp_node, gimple **vec_stmt,
> +                         slp_tree slp_node, stmt_vec_info *vec_stmt,
>                           stmt_vector_for_cost *cost_vec)
>  {
>    if (slp_node)
> @@ -9647,11 +9650,11 @@ vect_transform_stmt (gimple *stmt, gimpl
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>    bool is_store = false;
> -  gimple *vec_stmt = NULL;
> +  stmt_vec_info vec_stmt = NULL;
>    bool done;
>
>    gcc_assert (slp_node || !PURE_SLP_STMT (stmt_info));
> -  gimple *old_vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
> +  stmt_vec_info old_vec_stmt_info = STMT_VINFO_VEC_STMT (stmt_info);
>
>    bool nested_p = (STMT_VINFO_LOOP_VINFO (stmt_info)
>                    && nested_in_vect_loop_p
> @@ -9752,7 +9755,7 @@ vect_transform_stmt (gimple *stmt, gimpl
>       This would break hybrid SLP vectorization.  */
>    if (slp_node)
>      gcc_assert (!vec_stmt
> -               && STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt);
> +               && STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt_info);
>
>    /* Handle inner-loop stmts whose DEF is used in the loop-nest that
>       is being vectorized, but outside the immediately enclosing loop.  */

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info
  2018-07-24  9:59 ` [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info Richard Sandiford
@ 2018-07-25  9:22   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:22 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:59 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes STMT_VINFO_REDUC_DEF from a gimple stmt to a
> stmt_vec_info.
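
A minimal stand-in sketch (invented model types, not the real GCC
declarations) of what storing the stmt_vec_info directly buys at a
use site:

  struct gimple {};

  /* Hypothetical, simplified model of _stmt_vec_info; only the
     fields needed for the illustration.  */
  struct stmt_vec_info_model
  {
    gimple *stmt;                    /* the underlying gimple stmt */
    stmt_vec_info_model *reduc_def;  /* was: gimple *reduc_def */
    bool relevant_p;
  };
  typedef stmt_vec_info_model *stmt_vec_info;

  bool
  active_double_reduction_p (stmt_vec_info stmt_info)
  {
    /* With a gimple * field this needed a vinfo_for_stmt lookup to
       get back to the stmt_vec_info; now the field already is one.  */
    return stmt_info->reduc_def->relevant_p;
  }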

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::reduc_def): Change from
>         a gimple stmt to a stmt_vec_info.
>         * tree-vect-loop.c (vect_active_double_reduction_p)
>         (vect_force_simple_reduction, vectorizable_reduction): Update
>         accordingly.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:50.777128110 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:53.909100298 +0100
> @@ -921,7 +921,7 @@ struct _stmt_vec_info {
>    /* On a reduction PHI the def returned by vect_force_simple_reduction.
>       On the def returned by vect_force_simple_reduction the
>       corresponding PHI.  */
> -  gimple *reduc_def;
> +  stmt_vec_info reduc_def;
>
>    /* The number of scalar stmt references from active SLP instances.  */
>    unsigned int num_slp_uses;
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:50.777128110 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:53.909100298 +0100
> @@ -1499,8 +1499,7 @@ vect_active_double_reduction_p (stmt_vec
>    if (STMT_VINFO_DEF_TYPE (stmt_info) != vect_double_reduction_def)
>      return false;
>
> -  gimple *other_phi = STMT_VINFO_REDUC_DEF (stmt_info);
> -  return STMT_VINFO_RELEVANT_P (vinfo_for_stmt (other_phi));
> +  return STMT_VINFO_RELEVANT_P (STMT_VINFO_REDUC_DEF (stmt_info));
>  }
>
>  /* Function vect_analyze_loop_operations.
> @@ -3293,12 +3292,12 @@ vect_force_simple_reduction (loop_vec_in
>                                           &v_reduc_type);
>    if (def)
>      {
> -      stmt_vec_info reduc_def_info = vinfo_for_stmt (phi);
> -      STMT_VINFO_REDUC_TYPE (reduc_def_info) = v_reduc_type;
> -      STMT_VINFO_REDUC_DEF (reduc_def_info) = def;
> -      reduc_def_info = vinfo_for_stmt (def);
> -      STMT_VINFO_REDUC_TYPE (reduc_def_info) = v_reduc_type;
> -      STMT_VINFO_REDUC_DEF (reduc_def_info) = phi;
> +      stmt_vec_info phi_info = vinfo_for_stmt (phi);
> +      stmt_vec_info def_info = vinfo_for_stmt (def);
> +      STMT_VINFO_REDUC_TYPE (phi_info) = v_reduc_type;
> +      STMT_VINFO_REDUC_DEF (phi_info) = def_info;
> +      STMT_VINFO_REDUC_TYPE (def_info) = v_reduc_type;
> +      STMT_VINFO_REDUC_DEF (def_info) = phi_info;
>      }
>    return def;
>  }
> @@ -6153,17 +6152,16 @@ vectorizable_reduction (gimple *stmt, gi
>            for reductions involving a single statement.  */
>         return true;
>
> -      gimple *reduc_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
> -      if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (reduc_stmt)))
> -       reduc_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (reduc_stmt));
> +      stmt_vec_info reduc_stmt_info = STMT_VINFO_REDUC_DEF (stmt_info);
> +      if (STMT_VINFO_IN_PATTERN_P (reduc_stmt_info))
> +       reduc_stmt_info = STMT_VINFO_RELATED_STMT (reduc_stmt_info);
>
> -      stmt_vec_info reduc_stmt_info = vinfo_for_stmt (reduc_stmt);
>        if (STMT_VINFO_VEC_REDUCTION_TYPE (reduc_stmt_info)
>           == EXTRACT_LAST_REDUCTION)
>         /* Leave the scalar phi in place.  */
>         return true;
>
> -      gcc_assert (is_gimple_assign (reduc_stmt));
> +      gassign *reduc_stmt = as_a <gassign *> (reduc_stmt_info->stmt);
>        for (unsigned k = 1; k < gimple_num_ops (reduc_stmt); ++k)
>         {
>           tree op = gimple_op (reduc_stmt, k);
> @@ -6314,7 +6312,7 @@ vectorizable_reduction (gimple *stmt, gi
>       The last use is the reduction variable.  In case of nested cycle this
>       assumption is not true: we use reduc_index to record the index of the
>       reduction variable.  */
> -  gimple *reduc_def_stmt = NULL;
> +  stmt_vec_info reduc_def_info = NULL;
>    int reduc_index = -1;
>    for (i = 0; i < op_type; i++)
>      {
> @@ -6329,7 +6327,7 @@ vectorizable_reduction (gimple *stmt, gi
>        gcc_assert (is_simple_use);
>        if (dt == vect_reduction_def)
>         {
> -         reduc_def_stmt = def_stmt_info;
> +         reduc_def_info = def_stmt_info;
>           reduc_index = i;
>           continue;
>         }
> @@ -6353,7 +6351,7 @@ vectorizable_reduction (gimple *stmt, gi
>        if (dt == vect_nested_cycle)
>         {
>           found_nested_cycle_def = true;
> -         reduc_def_stmt = def_stmt_info;
> +         reduc_def_info = def_stmt_info;
>           reduc_index = i;
>         }
>
> @@ -6391,12 +6389,16 @@ vectorizable_reduction (gimple *stmt, gi
>         }
>
>        if (orig_stmt_info)
> -       reduc_def_stmt = STMT_VINFO_REDUC_DEF (orig_stmt_info);
> +       reduc_def_info = STMT_VINFO_REDUC_DEF (orig_stmt_info);
>        else
> -       reduc_def_stmt = STMT_VINFO_REDUC_DEF (stmt_info);
> +       reduc_def_info = STMT_VINFO_REDUC_DEF (stmt_info);
>      }
>
> -  if (! reduc_def_stmt || gimple_code (reduc_def_stmt) != GIMPLE_PHI)
> +  if (! reduc_def_info)
> +    return false;
> +
> +  gphi *reduc_def_phi = dyn_cast <gphi *> (reduc_def_info->stmt);
> +  if (!reduc_def_phi)
>      return false;
>
>    if (!(reduc_index == -1
> @@ -6415,12 +6417,11 @@ vectorizable_reduction (gimple *stmt, gi
>        return false;
>      }
>
> -  stmt_vec_info reduc_def_info = vinfo_for_stmt (reduc_def_stmt);
>    /* PHIs should not participate in patterns.  */
>    gcc_assert (!STMT_VINFO_RELATED_STMT (reduc_def_info));
>    enum vect_reduction_type v_reduc_type
>      = STMT_VINFO_REDUC_TYPE (reduc_def_info);
> -  gimple *tmp = STMT_VINFO_REDUC_DEF (reduc_def_info);
> +  stmt_vec_info tmp = STMT_VINFO_REDUC_DEF (reduc_def_info);
>
>    STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info) = v_reduc_type;
>    /* If we have a condition reduction, see if we can simplify it further.  */
> @@ -6547,15 +6548,14 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (orig_stmt_info)
>      gcc_assert (tmp == orig_stmt_info
> -               || (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp))
> -                   == orig_stmt_info));
> +               || REDUC_GROUP_FIRST_ELEMENT (tmp) == orig_stmt_info);
>    else
>      /* We changed STMT to be the first stmt in reduction chain, hence we
>         check that in this case the first element in the chain is STMT.  */
> -    gcc_assert (stmt == tmp
> -               || REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (tmp)) == stmt);
> +    gcc_assert (tmp == stmt_info
> +               || REDUC_GROUP_FIRST_ELEMENT (tmp) == stmt_info);
>
> -  if (STMT_VINFO_LIVE_P (vinfo_for_stmt (reduc_def_stmt)))
> +  if (STMT_VINFO_LIVE_P (reduc_def_info))
>      return false;
>
>    if (slp_node)
> @@ -6702,9 +6702,9 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (nested_cycle)
>      {
> -      def_bb = gimple_bb (reduc_def_stmt);
> +      def_bb = gimple_bb (reduc_def_phi);
>        def_stmt_loop = def_bb->loop_father;
> -      def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_stmt,
> +      def_arg = PHI_ARG_DEF_FROM_EDGE (reduc_def_phi,
>                                         loop_preheader_edge (def_stmt_loop));
>        stmt_vec_info def_arg_stmt_info = loop_vinfo->lookup_def (def_arg);
>        if (def_arg_stmt_info
> @@ -6954,7 +6954,7 @@ vectorizable_reduction (gimple *stmt, gi
>     in vectorizable_reduction and there are no intermediate stmts
>     participating.  */
>    stmt_vec_info use_stmt_info;
> -  tree reduc_phi_result = gimple_phi_result (reduc_def_stmt);
> +  tree reduc_phi_result = gimple_phi_result (reduc_def_phi);
>    if (ncopies > 1
>        && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
>        && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
> @@ -7039,7 +7039,7 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (reduction_type == FOLD_LEFT_REDUCTION)
>      return vectorize_fold_left_reduction
> -      (stmt, gsi, vec_stmt, slp_node, reduc_def_stmt, code,
> +      (stmt, gsi, vec_stmt, slp_node, reduc_def_phi, code,
>         reduc_fn, ops, vectype_in, reduc_index, masks);
>
>    if (reduction_type == EXTRACT_LAST_REDUCTION)
> @@ -7070,7 +7070,7 @@ vectorizable_reduction (gimple *stmt, gi
>    if (slp_node)
>      phis.splice (SLP_TREE_VEC_STMTS (slp_node_instance->reduc_phis));
>    else
> -    phis.quick_push (STMT_VINFO_VEC_STMT (vinfo_for_stmt (reduc_def_stmt)));
> +    phis.quick_push (STMT_VINFO_VEC_STMT (reduc_def_info));
>
>    for (j = 0; j < ncopies; j++)
>      {
> @@ -7208,7 +7208,7 @@ vectorizable_reduction (gimple *stmt, gi
>    if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
>      vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
>
> -  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_stmt,
> +  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_phi,
>                                     epilog_copies, reduc_fn, phis,
>                                     double_reduc, slp_node, slp_node_instance,
>                                     cond_reduc_val, cond_reduc_op_code,

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info>
  2018-07-24  9:59 ` [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info> Richard Sandiford
@ 2018-07-25  9:22   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:22 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:59 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes SLP_TREE_VEC_STMTS from a vec<gimple *> to a
> vec<stmt_vec_info>.  This involved making the same change to the
> phis vector in vectorizable_reduction, since SLP_TREE_VEC_STMTS is
> spliced into it here:
>
>   phis.splice (SLP_TREE_VEC_STMTS (slp_node_instance->reduc_phis));
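
For reference, the reason the phis vector has to change in lockstep:
splice only accepts a vector with the same element type, so once
SLP_TREE_VEC_STMTS holds stmt_vec_infos, anything spliced from it
must too.  A simplified stand-in (hypothetical helper, not the real
vec.h splice):

  #include <vector>

  struct stmt_vec_info_model;
  typedef stmt_vec_info_model *stmt_vec_info;

  /* Stand-in for vec<T>::splice: element types must match exactly.  */
  template <typename T>
  void
  splice (std::vector<T> &dst, const std::vector<T> &src)
  {
    dst.insert (dst.end (), src.begin (), src.end ());
  }

  void
  example (std::vector<stmt_vec_info> &slp_tree_vec_stmts)
  {
    /* If this were still a vector of gimple *, the splice below
       would no longer type-check.  */
    std::vector<stmt_vec_info> phis;
    splice (phis, slp_tree_vec_stmts);
  }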

OK, saw that coming - question from earlier patch still stands.

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_slp_tree::vec_stmts): Change from a
>         vec<gimple *> to a vec<stmt_vec_info>.
>         * tree-vect-loop.c (vect_create_epilog_for_reduction): Change
>         the reduction_phis argument from a vec<gimple *> to a
>         vec<stmt_vec_info>.
>         (vectorizable_reduction): Likewise the phis local variable that
>         is passed to vect_create_epilog_for_reduction.  Update for new type
>         of SLP_TREE_VEC_STMTS.
>         (vectorizable_induction): Update for new type of SLP_TREE_VEC_STMTS.
>         (vectorizable_live_operation): Likewise.
>         * tree-vect-slp.c (vect_get_slp_vect_defs): Likewise.
>         (vect_transform_slp_perm_load, vect_schedule_slp_instance): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:47.489157307 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:50.777128110 +0100
> @@ -143,7 +143,7 @@ struct _slp_tree {
>       permutation.  */
>    vec<unsigned> load_permutation;
>    /* Vectorized stmt/s.  */
> -  vec<gimple *> vec_stmts;
> +  vec<stmt_vec_info> vec_stmts;
>    /* Number of vector stmts that are created to replace the group of scalar
>       stmts. It is calculated during the transformation phase as the number of
>       scalar elements in one scalar iteration (GROUP_SIZE) multiplied by VF
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:47.489157307 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:50.777128110 +0100
> @@ -4412,7 +4412,7 @@ get_initial_defs_for_reduction (slp_tree
>  vect_create_epilog_for_reduction (vec<tree> vect_defs, gimple *stmt,
>                                   gimple *reduc_def_stmt,
>                                   int ncopies, internal_fn reduc_fn,
> -                                 vec<gimple *> reduction_phis,
> +                                 vec<stmt_vec_info> reduction_phis,
>                                    bool double_reduc,
>                                   slp_tree slp_node,
>                                   slp_instance slp_node_instance,
> @@ -4429,6 +4429,7 @@ vect_create_epilog_for_reduction (vec<tr
>    tree scalar_dest;
>    tree scalar_type;
>    gimple *new_phi = NULL, *phi;
> +  stmt_vec_info phi_info;
>    gimple_stmt_iterator exit_gsi;
>    tree vec_dest;
>    tree new_temp = NULL_TREE, new_dest, new_name, new_scalar_dest;
> @@ -4442,7 +4443,8 @@ vect_create_epilog_for_reduction (vec<tr
>    tree orig_name, scalar_result;
>    imm_use_iterator imm_iter, phi_imm_iter;
>    use_operand_p use_p, phi_use_p;
> -  gimple *use_stmt, *reduction_phi = NULL;
> +  gimple *use_stmt;
> +  stmt_vec_info reduction_phi_info = NULL;
>    bool nested_in_vect_loop = false;
>    auto_vec<gimple *> new_phis;
>    auto_vec<stmt_vec_info> inner_phis;
> @@ -4540,7 +4542,7 @@ vect_create_epilog_for_reduction (vec<tr
>      }
>
>    /* Set phi nodes arguments.  */
> -  FOR_EACH_VEC_ELT (reduction_phis, i, phi)
> +  FOR_EACH_VEC_ELT (reduction_phis, i, phi_info)
>      {
>        tree vec_init_def = vec_initial_defs[i];
>        tree def = vect_defs[i];
> @@ -4548,7 +4550,7 @@ vect_create_epilog_for_reduction (vec<tr
>          {
>           if (j != 0)
>             {
> -             phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
> +             phi_info = STMT_VINFO_RELATED_STMT (phi_info);
>               if (nested_in_vect_loop)
>                 vec_init_def
>                   = vect_get_vec_def_for_stmt_copy (initial_def_dt,
> @@ -4557,6 +4559,7 @@ vect_create_epilog_for_reduction (vec<tr
>
>           /* Set the loop-entry arg of the reduction-phi.  */
>
> +         gphi *phi = as_a <gphi *> (phi_info->stmt);
>           if (STMT_VINFO_VEC_REDUCTION_TYPE (stmt_info)
>               == INTEGER_INDUC_COND_REDUCTION)
>             {
> @@ -4569,19 +4572,18 @@ vect_create_epilog_for_reduction (vec<tr
>               tree induc_val_vec
>                 = build_vector_from_val (vec_init_def_type, induc_val);
>
> -             add_phi_arg (as_a <gphi *> (phi), induc_val_vec,
> -                          loop_preheader_edge (loop), UNKNOWN_LOCATION);
> +             add_phi_arg (phi, induc_val_vec, loop_preheader_edge (loop),
> +                          UNKNOWN_LOCATION);
>             }
>           else
> -           add_phi_arg (as_a <gphi *> (phi), vec_init_def,
> -                        loop_preheader_edge (loop), UNKNOWN_LOCATION);
> +           add_phi_arg (phi, vec_init_def, loop_preheader_edge (loop),
> +                        UNKNOWN_LOCATION);
>
>            /* Set the loop-latch arg for the reduction-phi.  */
>            if (j > 0)
>              def = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type, def);
>
> -          add_phi_arg (as_a <gphi *> (phi), def, loop_latch_edge (loop),
> -                      UNKNOWN_LOCATION);
> +         add_phi_arg (phi, def, loop_latch_edge (loop), UNKNOWN_LOCATION);
>
>            if (dump_enabled_p ())
>              {
> @@ -5599,7 +5601,7 @@ vect_create_epilog_for_reduction (vec<tr
>        if (k % ratio == 0)
>          {
>            epilog_stmt = new_phis[k / ratio];
> -          reduction_phi = reduction_phis[k / ratio];
> +         reduction_phi_info = reduction_phis[k / ratio];
>           if (double_reduc)
>             inner_phi = inner_phis[k / ratio];
>          }
> @@ -5672,7 +5674,6 @@ vect_create_epilog_for_reduction (vec<tr
>                    stmt_vec_info use_stmt_vinfo;
>                    tree vect_phi_init, preheader_arg, vect_phi_res;
>                    basic_block bb = gimple_bb (use_stmt);
> -                 gimple *use;
>
>                    /* Check that USE_STMT is really double reduction phi
>                       node.  */
> @@ -5722,13 +5723,14 @@ vect_create_epilog_for_reduction (vec<tr
>                    /* Replace the use, i.e., set the correct vs1 in the regular
>                       reduction phi node.  FORNOW, NCOPIES is always 1, so the
>                       loop is redundant.  */
> -                  use = reduction_phi;
> -                  for (j = 0; j < ncopies; j++)
> -                    {
> -                      edge pr_edge = loop_preheader_edge (loop);
> -                      SET_PHI_ARG_DEF (use, pr_edge->dest_idx, vect_phi_res);
> -                      use = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (use));
> -                    }
> +                 stmt_vec_info use_info = reduction_phi_info;
> +                 for (j = 0; j < ncopies; j++)
> +                   {
> +                     edge pr_edge = loop_preheader_edge (loop);
> +                     SET_PHI_ARG_DEF (as_a <gphi *> (use_info->stmt),
> +                                      pr_edge->dest_idx, vect_phi_res);
> +                     use_info = STMT_VINFO_RELATED_STMT (use_info);
> +                   }
>                  }
>              }
>          }
> @@ -6112,7 +6114,7 @@ vectorizable_reduction (gimple *stmt, gi
>    auto_vec<tree> vec_oprnds1;
>    auto_vec<tree> vec_oprnds2;
>    auto_vec<tree> vect_defs;
> -  auto_vec<gimple *> phis;
> +  auto_vec<stmt_vec_info> phis;
>    int vec_num;
>    tree def0, tem;
>    tree cr_index_scalar_type = NULL_TREE, cr_index_vector_type = NULL_TREE;
> @@ -6218,7 +6220,7 @@ vectorizable_reduction (gimple *stmt, gi
>                   stmt_vec_info new_phi_info = loop_vinfo->add_stmt (new_phi);
>
>                   if (slp_node)
> -                   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi);
> +                   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_phi_info);
>                   else
>                     {
>                       if (j == 0)
> @@ -7075,9 +7077,9 @@ vectorizable_reduction (gimple *stmt, gi
>        if (code == COND_EXPR)
>          {
>            gcc_assert (!slp_node);
> -          vectorizable_condition (stmt, gsi, vec_stmt,
> -                                  PHI_RESULT (phis[0]),
> -                                  reduc_index, NULL, NULL);
> +         vectorizable_condition (stmt, gsi, vec_stmt,
> +                                 PHI_RESULT (phis[0]->stmt),
> +                                 reduc_index, NULL, NULL);
>            /* Multiple types are not supported for condition.  */
>            break;
>          }
> @@ -7501,7 +7503,8 @@ vectorizable_induction (gimple *phi,
>           /* Create the induction-phi that defines the induction-operand.  */
>           vec_dest = vect_get_new_vect_var (vectype, vect_simple_var, "vec_iv_");
>           induction_phi = create_phi_node (vec_dest, iv_loop->header);
> -         loop_vinfo->add_stmt (induction_phi);
> +         stmt_vec_info induction_phi_info
> +           = loop_vinfo->add_stmt (induction_phi);
>           induc_def = PHI_RESULT (induction_phi);
>
>           /* Create the iv update inside the loop  */
> @@ -7515,7 +7518,7 @@ vectorizable_induction (gimple *phi,
>           add_phi_arg (induction_phi, vec_def, loop_latch_edge (iv_loop),
>                        UNKNOWN_LOCATION);
>
> -         SLP_TREE_VEC_STMTS (slp_node).quick_push (induction_phi);
> +         SLP_TREE_VEC_STMTS (slp_node).quick_push (induction_phi_info);
>         }
>
>        /* Re-use IVs when we can.  */
> @@ -7540,7 +7543,7 @@ vectorizable_induction (gimple *phi,
>           vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
>           for (; ivn < nvects; ++ivn)
>             {
> -             gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs];
> +             gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs]->stmt;
>               tree def;
>               if (gimple_code (iv) == GIMPLE_PHI)
>                 def = gimple_phi_result (iv);
> @@ -7556,8 +7559,8 @@ vectorizable_induction (gimple *phi,
>                   gimple_stmt_iterator tgsi = gsi_for_stmt (iv);
>                   gsi_insert_after (&tgsi, new_stmt, GSI_CONTINUE_LINKING);
>                 }
> -             loop_vinfo->add_stmt (new_stmt);
> -             SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt);
> +             SLP_TREE_VEC_STMTS (slp_node).quick_push
> +               (loop_vinfo->add_stmt (new_stmt));
>             }
>         }
>
> @@ -7943,7 +7946,7 @@ vectorizable_live_operation (gimple *stm
>        gcc_assert (!LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
>
>        /* Get the correct slp vectorized stmt.  */
> -      gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry];
> +      gimple *vec_stmt = SLP_TREE_VEC_STMTS (slp_node)[vec_entry]->stmt;
>        if (gphi *phi = dyn_cast <gphi *> (vec_stmt))
>         vec_lhs = gimple_phi_result (phi);
>        else
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:44.293185688 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:50.777128110 +0100
> @@ -3557,18 +3557,18 @@ vect_get_constant_vectors (tree op, slp_
>  vect_get_slp_vect_defs (slp_tree slp_node, vec<tree> *vec_oprnds)
>  {
>    tree vec_oprnd;
> -  gimple *vec_def_stmt;
> +  stmt_vec_info vec_def_stmt_info;
>    unsigned int i;
>
>    gcc_assert (SLP_TREE_VEC_STMTS (slp_node).exists ());
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_VEC_STMTS (slp_node), i, vec_def_stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_VEC_STMTS (slp_node), i, vec_def_stmt_info)
>      {
> -      gcc_assert (vec_def_stmt);
> -      if (gimple_code (vec_def_stmt) == GIMPLE_PHI)
> -       vec_oprnd = gimple_phi_result (vec_def_stmt);
> +      gcc_assert (vec_def_stmt_info);
> +      if (gphi *vec_def_phi = dyn_cast <gphi *> (vec_def_stmt_info->stmt))
> +       vec_oprnd = gimple_phi_result (vec_def_phi);
>        else
> -       vec_oprnd = gimple_get_lhs (vec_def_stmt);
> +       vec_oprnd = gimple_get_lhs (vec_def_stmt_info->stmt);
>        vec_oprnds->quick_push (vec_oprnd);
>      }
>  }
> @@ -3687,6 +3687,7 @@ vect_transform_slp_perm_load (slp_tree n
>  {
>    gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  vec_info *vinfo = stmt_info->vinfo;
>    tree mask_element_type = NULL_TREE, mask_type;
>    int vec_index = 0;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> @@ -3827,26 +3828,28 @@ vect_transform_slp_perm_load (slp_tree n
>                   /* Generate the permute statement if necessary.  */
>                   tree first_vec = dr_chain[first_vec_index];
>                   tree second_vec = dr_chain[second_vec_index];
> -                 gimple *perm_stmt;
> +                 stmt_vec_info perm_stmt_info;
>                   if (! noop_p)
>                     {
>                       tree perm_dest
>                         = vect_create_destination_var (gimple_assign_lhs (stmt),
>                                                        vectype);
>                       perm_dest = make_ssa_name (perm_dest);
> -                     perm_stmt = gimple_build_assign (perm_dest,
> -                                                      VEC_PERM_EXPR,
> -                                                      first_vec, second_vec,
> -                                                      mask_vec);
> -                     vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +                     gassign *perm_stmt
> +                       = gimple_build_assign (perm_dest, VEC_PERM_EXPR,
> +                                              first_vec, second_vec,
> +                                              mask_vec);
> +                     perm_stmt_info
> +                       = vect_finish_stmt_generation (stmt, perm_stmt, gsi);
>                     }
>                   else
>                     /* If mask was NULL_TREE generate the requested
>                        identity transform.  */
> -                   perm_stmt = SSA_NAME_DEF_STMT (first_vec);
> +                   perm_stmt_info = vinfo->lookup_def (first_vec);
>
>                   /* Store the vector statement in NODE.  */
> -                 SLP_TREE_VEC_STMTS (node)[vect_stmts_counter++] = perm_stmt;
> +                 SLP_TREE_VEC_STMTS (node)[vect_stmts_counter++]
> +                   = perm_stmt_info;
>                 }
>
>               index = 0;
> @@ -3948,8 +3951,8 @@ vect_schedule_slp_instance (slp_tree nod
>           mask.quick_push (0);
>        if (ocode != ERROR_MARK)
>         {
> -         vec<gimple *> v0;
> -         vec<gimple *> v1;
> +         vec<stmt_vec_info> v0;
> +         vec<stmt_vec_info> v1;
>           unsigned j;
>           tree tmask = NULL_TREE;
>           vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> @@ -3990,10 +3993,11 @@ vect_schedule_slp_instance (slp_tree nod
>               gimple *vstmt;
>               vstmt = gimple_build_assign (make_ssa_name (vectype),
>                                            VEC_PERM_EXPR,
> -                                          gimple_assign_lhs (v0[j]),
> -                                          gimple_assign_lhs (v1[j]), tmask);
> -             vect_finish_stmt_generation (stmt, vstmt, &si);
> -             SLP_TREE_VEC_STMTS (node).quick_push (vstmt);
> +                                          gimple_assign_lhs (v0[j]->stmt),
> +                                          gimple_assign_lhs (v1[j]->stmt),
> +                                          tmask);
> +             SLP_TREE_VEC_STMTS (node).quick_push
> +               (vect_finish_stmt_generation (stmt, vstmt, &si));
>             }
>           v0.release ();
>           v1.release ();

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info>
  2018-07-24  9:59 ` [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info> Richard Sandiford
@ 2018-07-25  9:23   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:23 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 11:59 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes LOOP_VINFO_REDUCTIONS from an auto_vec<gimple *>
> to an auto_vec<stmt_vec_info>.  It also changes the associated
> vect_force_simple_reduction so that it takes and returns stmt_vec_infos
> instead of gimple stmts.
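
A sketch of the new calling convention (invented stand-in types and a
placeholder body, not the patched GCC code): the caller passes the
PHI's stmt_vec_info and gets one back, reading the raw stmt through
->stmt only where it is still needed:

  struct gimple {};
  struct stmt_vec_info_model { gimple *stmt; };
  typedef stmt_vec_info_model *stmt_vec_info;

  /* Placeholder standing in for the real reduction detector.  */
  stmt_vec_info
  vect_force_simple_reduction (stmt_vec_info phi_info, bool *double_reduc)
  {
    *double_reduc = false;
    return phi_info;
  }

  void
  caller (stmt_vec_info phi_info)
  {
    bool double_reduc;
    stmt_vec_info reduc_stmt_info
      = vect_force_simple_reduction (phi_info, &double_reduc);
    if (reduc_stmt_info)
      {
        /* Callers that still need the underlying gimple stmt
           (e.g. tree-parloops.c) read it through the vinfo.  */
        gimple *raw_stmt = reduc_stmt_info->stmt;
        (void) raw_stmt;
      }
  }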

OK.

Highlights that reduction detection needs refactoring to be usable outside
of the vectorizer (see tree-parloops.c).  Exposing vinfos doesn't make the
situation better here...

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_loop_vec_info::reductions): Change from an
>         auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
>         (vect_force_simple_reduction): Take and return stmt_vec_infos rather
>         than gimple stmts.
>         * tree-parloops.c (valid_reduction_p): Take a stmt_vec_info instead
>         of a gimple stmt.
>         (gather_scalar_reductions): Update after above interface changes.
>         * tree-vect-loop.c (vect_analyze_scalar_cycles_1): Likewise.
>         (vect_is_simple_reduction): Take and return stmt_vec_infos rather
>         than gimple stmts.
>         (vect_force_simple_reduction): Likewise.
>         * tree-vect-patterns.c (vect_pattern_recog_1): Update use of
>         LOOP_VINFO_REDUCTIONS.
>         * tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:53.909100298 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:57.277070390 +0100
> @@ -475,7 +475,7 @@ typedef struct _loop_vec_info : public v
>    auto_vec<gimple *> may_misalign_stmts;
>
>    /* Reduction cycles detected in the loop. Used in loop-aware SLP.  */
> -  auto_vec<gimple *> reductions;
> +  auto_vec<stmt_vec_info> reductions;
>
>    /* All reduction chains in the loop, represented by the first
>       stmt in the chain.  */
> @@ -1627,8 +1627,8 @@ extern tree vect_create_addr_base_for_ve
>
>  /* In tree-vect-loop.c.  */
>  /* FORNOW: Used in tree-parloops.c.  */
> -extern gimple *vect_force_simple_reduction (loop_vec_info, gimple *,
> -                                           bool *, bool);
> +extern stmt_vec_info vect_force_simple_reduction (loop_vec_info, stmt_vec_info,
> +                                                 bool *, bool);
>  /* Used in gimple-loop-interchange.c.  */
>  extern bool check_reduction_path (dump_user_location_t, loop_p, gphi *, tree,
>                                   enum tree_code);
> Index: gcc/tree-parloops.c
> ===================================================================
> --- gcc/tree-parloops.c 2018-06-27 10:27:09.778650686 +0100
> +++ gcc/tree-parloops.c 2018-07-24 10:22:57.273070426 +0100
> @@ -2570,15 +2570,14 @@ set_reduc_phi_uids (reduction_info **slo
>    return 1;
>  }
>
> -/* Return true if the type of reduction performed by STMT is suitable
> +/* Return true if the type of reduction performed by STMT_INFO is suitable
>     for this pass.  */
>
>  static bool
> -valid_reduction_p (gimple *stmt)
> +valid_reduction_p (stmt_vec_info stmt_info)
>  {
>    /* Parallelization would reassociate the operation, which isn't
>       allowed for in-order reductions.  */
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vect_reduction_type reduc_type = STMT_VINFO_REDUC_TYPE (stmt_info);
>    return reduc_type != FOLD_LEFT_REDUCTION;
>  }
> @@ -2615,10 +2614,11 @@ gather_scalar_reductions (loop_p loop, r
>        if (simple_iv (loop, loop, res, &iv, true))
>         continue;
>
> -      gimple *reduc_stmt
> -       = vect_force_simple_reduction (simple_loop_info, phi,
> +      stmt_vec_info reduc_stmt_info
> +       = vect_force_simple_reduction (simple_loop_info,
> +                                      simple_loop_info->lookup_stmt (phi),
>                                        &double_reduc, true);
> -      if (!reduc_stmt || !valid_reduction_p (reduc_stmt))
> +      if (!reduc_stmt_info || !valid_reduction_p (reduc_stmt_info))
>         continue;
>
>        if (double_reduc)
> @@ -2627,11 +2627,11 @@ gather_scalar_reductions (loop_p loop, r
>             continue;
>
>           double_reduc_phis.safe_push (phi);
> -         double_reduc_stmts.safe_push (reduc_stmt);
> +         double_reduc_stmts.safe_push (reduc_stmt_info->stmt);
>           continue;
>         }
>
> -      build_new_reduction (reduction_list, reduc_stmt, phi);
> +      build_new_reduction (reduction_list, reduc_stmt_info->stmt, phi);
>      }
>    delete simple_loop_info;
>
> @@ -2661,12 +2661,15 @@ gather_scalar_reductions (loop_p loop, r
>                              &iv, true))
>                 continue;
>
> -             gimple *inner_reduc_stmt
> -               = vect_force_simple_reduction (simple_loop_info, inner_phi,
> +             stmt_vec_info inner_phi_info
> +               = simple_loop_info->lookup_stmt (inner_phi);
> +             stmt_vec_info inner_reduc_stmt_info
> +               = vect_force_simple_reduction (simple_loop_info,
> +                                              inner_phi_info,
>                                                &double_reduc, true);
>               gcc_assert (!double_reduc);
> -             if (inner_reduc_stmt == NULL
> -                 || !valid_reduction_p (inner_reduc_stmt))
> +             if (!inner_reduc_stmt_info
> +                 || !valid_reduction_p (inner_reduc_stmt_info))
>                 continue;
>
>               build_new_reduction (reduction_list, double_reduc_stmts[i], phi);
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:22:53.909100298 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:57.273070426 +0100
> @@ -546,7 +546,6 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>        gimple *phi = worklist.pop ();
>        tree def = PHI_RESULT (phi);
>        stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
> -      gimple *reduc_stmt;
>
>        if (dump_enabled_p ())
>          {
> @@ -557,9 +556,10 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>        gcc_assert (!virtual_operand_p (def)
>                   && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_unknown_def_type);
>
> -      reduc_stmt = vect_force_simple_reduction (loop_vinfo, phi,
> -                                               &double_reduc, false);
> -      if (reduc_stmt)
> +      stmt_vec_info reduc_stmt_info
> +       = vect_force_simple_reduction (loop_vinfo, stmt_vinfo,
> +                                      &double_reduc, false);
> +      if (reduc_stmt_info)
>          {
>            if (double_reduc)
>              {
> @@ -568,8 +568,8 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>                                  "Detected double reduction.\n");
>
>                STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_double_reduction_def;
> -              STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
> -                                                    vect_double_reduction_def;
> +             STMT_VINFO_DEF_TYPE (reduc_stmt_info)
> +               = vect_double_reduction_def;
>              }
>            else
>              {
> @@ -580,8 +580,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>                                      "Detected vectorizable nested cycle.\n");
>
>                    STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_nested_cycle;
> -                  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
> -                                                             vect_nested_cycle;
> +                 STMT_VINFO_DEF_TYPE (reduc_stmt_info) = vect_nested_cycle;
>                  }
>                else
>                  {
> @@ -590,13 +589,13 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>                                      "Detected reduction.\n");
>
>                    STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_reduction_def;
> -                  STMT_VINFO_DEF_TYPE (vinfo_for_stmt (reduc_stmt)) =
> -                                                           vect_reduction_def;
> +                 STMT_VINFO_DEF_TYPE (reduc_stmt_info) = vect_reduction_def;
>                    /* Store the reduction cycles for possible vectorization in
>                       loop-aware SLP if it was not detected as reduction
>                      chain.  */
> -                 if (! REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (reduc_stmt)))
> -                   LOOP_VINFO_REDUCTIONS (loop_vinfo).safe_push (reduc_stmt);
> +                 if (! REDUC_GROUP_FIRST_ELEMENT (reduc_stmt_info))
> +                   LOOP_VINFO_REDUCTIONS (loop_vinfo).safe_push
> +                     (reduc_stmt_info);
>                  }
>              }
>          }
> @@ -2530,8 +2529,8 @@ vect_is_slp_reduction (loop_vec_info loo
>    struct loop *loop = (gimple_bb (phi))->loop_father;
>    struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
>    enum tree_code code;
> -  gimple *current_stmt = NULL, *loop_use_stmt = NULL, *first, *next_stmt;
> -  stmt_vec_info use_stmt_info, current_stmt_info;
> +  gimple *loop_use_stmt = NULL, *first, *next_stmt;
> +  stmt_vec_info use_stmt_info, current_stmt_info = NULL;
>    tree lhs;
>    imm_use_iterator imm_iter;
>    use_operand_p use_p;
> @@ -2593,9 +2592,8 @@ vect_is_slp_reduction (loop_vec_info loo
>
>        /* Insert USE_STMT into reduction chain.  */
>        use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
> -      if (current_stmt)
> +      if (current_stmt_info)
>          {
> -          current_stmt_info = vinfo_for_stmt (current_stmt);
>           REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = loop_use_stmt;
>            REDUC_GROUP_FIRST_ELEMENT (use_stmt_info)
>              = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
> @@ -2604,7 +2602,7 @@ vect_is_slp_reduction (loop_vec_info loo
>         REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = loop_use_stmt;
>
>        lhs = gimple_assign_lhs (loop_use_stmt);
> -      current_stmt = loop_use_stmt;
> +      current_stmt_info = use_stmt_info;
>        size++;
>     }
>
> @@ -2614,7 +2612,7 @@ vect_is_slp_reduction (loop_vec_info loo
>    /* Swap the operands, if needed, to make the reduction operand be the second
>       operand.  */
>    lhs = PHI_RESULT (phi);
> -  next_stmt = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (current_stmt));
> +  next_stmt = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
>    while (next_stmt)
>      {
>        if (gimple_assign_rhs2 (next_stmt) == lhs)
> @@ -2671,7 +2669,7 @@ vect_is_slp_reduction (loop_vec_info loo
>      }
>
>    /* Save the chain for further analysis in SLP detection.  */
> -  first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (current_stmt));
> +  first = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
>    LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first);
>    REDUC_GROUP_SIZE (vinfo_for_stmt (first)) = size;
>
> @@ -2867,15 +2865,16 @@ check_reduction_path (dump_user_location
>
>  */
>
> -static gimple *
> -vect_is_simple_reduction (loop_vec_info loop_info, gimple *phi,
> +static stmt_vec_info
> +vect_is_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info,
>                           bool *double_reduc,
>                           bool need_wrapping_integral_overflow,
>                           enum vect_reduction_type *v_reduc_type)
>  {
> +  gphi *phi = as_a <gphi *> (phi_info->stmt);
>    struct loop *loop = (gimple_bb (phi))->loop_father;
>    struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
> -  gimple *def_stmt, *phi_use_stmt = NULL;
> +  gimple *phi_use_stmt = NULL;
>    enum tree_code orig_code, code;
>    tree op1, op2, op3 = NULL_TREE, op4 = NULL_TREE;
>    tree type;
> @@ -2937,13 +2936,16 @@ vect_is_simple_reduction (loop_vec_info
>        return NULL;
>      }
>
> -  def_stmt = SSA_NAME_DEF_STMT (loop_arg);
> -  if (is_gimple_assign (def_stmt))
> +  stmt_vec_info def_stmt_info = loop_info->lookup_def (loop_arg);
> +  if (!def_stmt_info)
> +    return NULL;
> +
> +  if (gassign *def_stmt = dyn_cast <gassign *> (def_stmt_info->stmt))
>      {
>        name = gimple_assign_lhs (def_stmt);
>        phi_def = false;
>      }
> -  else if (gimple_code (def_stmt) == GIMPLE_PHI)
> +  else if (gphi *def_stmt = dyn_cast <gphi *> (def_stmt_info->stmt))
>      {
>        name = PHI_RESULT (def_stmt);
>        phi_def = true;
> @@ -2954,14 +2956,12 @@ vect_is_simple_reduction (loop_vec_info
>         {
>           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                            "reduction: unhandled reduction operation: ");
> -         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, def_stmt, 0);
> +         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                           def_stmt_info->stmt, 0);
>         }
>        return NULL;
>      }
>
> -  if (! flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)))
> -    return NULL;
> -
>    nloop_uses = 0;
>    auto_vec<gphi *, 3> lcphis;
>    FOR_EACH_IMM_USE_FAST (use_p, imm_iter, name)
> @@ -2987,6 +2987,7 @@ vect_is_simple_reduction (loop_vec_info
>       defined in the inner loop.  */
>    if (phi_def)
>      {
> +      gphi *def_stmt = as_a <gphi *> (def_stmt_info->stmt);
>        op1 = PHI_ARG_DEF (def_stmt, 0);
>
>        if (gimple_phi_num_args (def_stmt) != 1
> @@ -3012,7 +3013,7 @@ vect_is_simple_reduction (loop_vec_info
>                             "detected double reduction: ");
>
>            *double_reduc = true;
> -          return def_stmt;
> +         return def_stmt_info;
>          }
>
>        return NULL;
> @@ -3038,6 +3039,7 @@ vect_is_simple_reduction (loop_vec_info
>           }
>      }
>
> +  gassign *def_stmt = as_a <gassign *> (def_stmt_info->stmt);
>    bool nested_in_vect_loop = flow_loop_nested_p (vect_loop, loop);
>    code = orig_code = gimple_assign_rhs_code (def_stmt);
>
> @@ -3178,7 +3180,7 @@ vect_is_simple_reduction (loop_vec_info
>      {
>        if (dump_enabled_p ())
>         report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
> -      return def_stmt;
> +      return def_stmt_info;
>      }
>
>    if (def1_info
> @@ -3237,7 +3239,7 @@ vect_is_simple_reduction (loop_vec_info
>              report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
>          }
>
> -      return def_stmt;
> +      return def_stmt_info;
>      }
>
>    /* Try to find SLP reduction chain.  */
> @@ -3250,7 +3252,7 @@ vect_is_simple_reduction (loop_vec_info
>          report_vect_op (MSG_NOTE, def_stmt,
>                         "reduction: detected reduction chain: ");
>
> -      return def_stmt;
> +      return def_stmt_info;
>      }
>
>    /* Dissolve group eventually half-built by vect_is_slp_reduction.  */
> @@ -3264,9 +3266,8 @@ vect_is_simple_reduction (loop_vec_info
>      }
>
>    /* Look for the expression computing loop_arg from loop PHI result.  */
> -  if (check_reduction_path (vect_location, loop, as_a <gphi *> (phi), loop_arg,
> -                           code))
> -    return def_stmt;
> +  if (check_reduction_path (vect_location, loop, phi, loop_arg, code))
> +    return def_stmt_info;
>
>    if (dump_enabled_p ())
>      {
> @@ -3281,25 +3282,24 @@ vect_is_simple_reduction (loop_vec_info
>     in-place if it enables detection of more reductions.  Arguments
>     as there.  */
>
> -gimple *
> -vect_force_simple_reduction (loop_vec_info loop_info, gimple *phi,
> +stmt_vec_info
> +vect_force_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info,
>                              bool *double_reduc,
>                              bool need_wrapping_integral_overflow)
>  {
>    enum vect_reduction_type v_reduc_type;
> -  gimple *def = vect_is_simple_reduction (loop_info, phi, double_reduc,
> -                                         need_wrapping_integral_overflow,
> -                                         &v_reduc_type);
> -  if (def)
> +  stmt_vec_info def_info
> +    = vect_is_simple_reduction (loop_info, phi_info, double_reduc,
> +                               need_wrapping_integral_overflow,
> +                               &v_reduc_type);
> +  if (def_info)
>      {
> -      stmt_vec_info phi_info = vinfo_for_stmt (phi);
> -      stmt_vec_info def_info = vinfo_for_stmt (def);
>        STMT_VINFO_REDUC_TYPE (phi_info) = v_reduc_type;
>        STMT_VINFO_REDUC_DEF (phi_info) = def_info;
>        STMT_VINFO_REDUC_TYPE (def_info) = v_reduc_type;
>        STMT_VINFO_REDUC_DEF (def_info) = phi_info;
>      }
> -  return def;
> +  return def_info;
>  }
>
>  /* Calculate cost of peeling the loop PEEL_ITERS_PROLOGUE times.  */
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:44.289185723 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:22:57.277070390 +0100
> @@ -4851,9 +4851,9 @@ vect_pattern_recog_1 (vect_recog_func *r
>    if (loop_vinfo)
>      {
>        unsigned ix, ix2;
> -      gimple **elem_ptr;
> +      stmt_vec_info *elem_ptr;
>        VEC_ORDERED_REMOVE_IF (LOOP_VINFO_REDUCTIONS (loop_vinfo), ix, ix2,
> -                            elem_ptr, *elem_ptr == stmt);
> +                            elem_ptr, *elem_ptr == stmt_info);
>      }
>  }
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:50.777128110 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:22:57.277070390 +0100
> @@ -1931,6 +1931,7 @@ vect_analyze_slp_instance (vec_info *vin
>    unsigned int group_size;
>    tree vectype, scalar_type = NULL_TREE;
>    gimple *next;
> +  stmt_vec_info next_info;
>    unsigned int i;
>    vec<slp_tree> loads;
>    struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
> @@ -2008,9 +2009,9 @@ vect_analyze_slp_instance (vec_info *vin
>    else
>      {
>        /* Collect reduction statements.  */
> -      vec<gimple *> reductions = as_a <loop_vec_info> (vinfo)->reductions;
> -      for (i = 0; reductions.iterate (i, &next); i++)
> -       scalar_stmts.safe_push (next);
> +      vec<stmt_vec_info> reductions = as_a <loop_vec_info> (vinfo)->reductions;
> +      for (i = 0; reductions.iterate (i, &next_info); i++)
> +       scalar_stmts.safe_push (next_info);
>      }
>
>    loads.create (group_size);
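
The net effect of the hunks above, reduced to a schematic core (a sketch
only, using the names visible in the patch; this is not additional code
from the series):

    /* Before: identify the def by its raw gimple stmt and validate its
       location with an explicit check.  */
    gimple *def_stmt = SSA_NAME_DEF_STMT (loop_arg);
    if (!flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)))
      return NULL;

    /* After: look the def up relative to the owning loop_vec_info.
       Statements outside the vectorization region have no stmt_vec_info,
       so the NULL check subsumes the old flow_bb_inside_loop_p test.  */
    stmt_vec_info def_stmt_info = loop_info->lookup_def (loop_arg);
    if (!def_stmt_info)
      return NULL;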

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [18/46] Make SLP_TREE_SCALAR_STMTS a vec<stmt_vec_info>
  2018-07-24 10:00 ` [18/46] Make SLP_TREE_SCALAR_STMTS " Richard Sandiford
@ 2018-07-25  9:27   ` Richard Biener
  2018-07-31 15:03     ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:27 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
> a vec<stmt_vec_info>.  It's longer than the previous conversions
> but mostly mechanical.

OK.  I don't remember exactly, but vect_external_def SLP nodes have an
empty stmts vector then?  I realize we only have those for defs that
are in the vectorized region.
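
The shape of the conversion, reduced to one representative before/after
pair (a sketch lifted from the vect_mark_slp_stmts hunk below, not new
code):

    /* Old: SLP_TREE_SCALAR_STMTS held raw gimple stmts, so consumers
       paid for a global vinfo_for_stmt lookup on every element.  */
    gimple *stmt;
    FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;

    /* New: the node holds stmt_vec_infos directly; the underlying
       gimple stmt stays reachable as stmt_info->stmt where needed.  */
    stmt_vec_info stmt_info;
    FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
      STMT_SLP_TYPE (stmt_info) = mark;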

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
>         to a vec<stmt_vec_info>.
>         * tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
>         (vect_create_new_slp_node): Take a vec<gimple *> instead of a
>         vec<stmt_vec_info>.
>         (_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
>         to a vec<stmt_vec_info>.
>         (bst_traits::value_type, bst_traits::compare_type): Likewise.
>         (bst_traits::hash): Update accordingly.
>         (vect_get_and_check_slp_defs): Change the stmts parameter from
>         a vec<gimple *> to a vec<stmt_vec_info>.
>         (vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
>         (vect_build_slp_tree): Likewise.
>         (vect_build_slp_tree_2): Likewise.  Update uses of
>         SLP_TREE_SCALAR_STMTS.
>         (vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
>         (vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
>         (vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
>         (vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
>         (vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
>         (vect_slp_analyze_node_operations, vect_slp_analyze_operations)
>         (vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
>         (vect_get_constant_vectors, vect_get_slp_defs)
>         (vect_transform_slp_perm_load, vect_schedule_slp_instance)
>         (vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
>         (vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
>         instead of gimple stmts.
>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
>         the stores parameter from a vec<gimple *> to a vec<stmt_vec_info>.
>         (vect_slp_analyze_instance_dependence): Update uses of
>         SLP_TREE_SCALAR_STMTS.
>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>         (vect_slp_analyze_and_verify_instance_alignment): Likewise.
>         * tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
>         (get_initial_defs_for_reduction): Likewise.
>         (vect_create_epilog_for_reduction): Likewise.
>         (vectorize_fold_left_reduction): Likewise.
>         * tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
>         (vect_model_simple_cost, vectorizable_shift, vectorizable_load)
>         (can_vectorize_live_stmts): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:57.277070390 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:00.401042649 +0100
> @@ -138,7 +138,7 @@ struct _slp_tree {
>    /* Nodes that contain def-stmts of this node statements operands.  */
>    vec<slp_tree> children;
>    /* A group of scalar stmts to be vectorized together.  */
> -  vec<gimple *> stmts;
> +  vec<stmt_vec_info> stmts;
>    /* Load permutation relative to the stores, NULL if there is no
>       permutation.  */
>    vec<unsigned> load_permutation;
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:57.277070390 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:00.401042649 +0100
> @@ -66,11 +66,11 @@ vect_free_slp_tree (slp_tree node, bool
>       statements would be redundant.  */
>    if (!final_p)
>      {
> -      gimple *stmt;
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +      stmt_vec_info stmt_info;
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>         {
> -         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
> -         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
> +         gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
> +         STMT_VINFO_NUM_SLP_USES (stmt_info)--;
>         }
>      }
>
> @@ -99,21 +99,21 @@ vect_free_slp_instance (slp_instance ins
>  /* Create an SLP node for SCALAR_STMTS.  */
>
>  static slp_tree
> -vect_create_new_slp_node (vec<gimple *> scalar_stmts)
> +vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
>  {
>    slp_tree node;
> -  gimple *stmt = scalar_stmts[0];
> +  stmt_vec_info stmt_info = scalar_stmts[0];
>    unsigned int nops;
>
> -  if (is_gimple_call (stmt))
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      nops = gimple_call_num_args (stmt);
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        nops = gimple_num_ops (stmt) - 1;
>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>         nops++;
>      }
> -  else if (gimple_code (stmt) == GIMPLE_PHI)
> +  else if (is_a <gphi *> (stmt_info->stmt))
>      nops = 0;
>    else
>      return NULL;
> @@ -128,8 +128,8 @@ vect_create_new_slp_node (vec<gimple *>
>    SLP_TREE_DEF_TYPE (node) = vect_internal_def;
>
>    unsigned i;
> -  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
> -    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
> +  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
> +    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
>
>    return node;
>  }
> @@ -141,7 +141,7 @@ vect_create_new_slp_node (vec<gimple *>
>  typedef struct _slp_oprnd_info
>  {
>    /* Def-stmts for the operands.  */
> -  vec<gimple *> def_stmts;
> +  vec<stmt_vec_info> def_stmts;
>    /* Information about the first statement, its vector def-type, type, the
>       operand itself in case it's constant, and an indication if it's a pattern
>       stmt.  */
> @@ -297,10 +297,10 @@ can_duplicate_and_interleave_p (unsigned
>     ok return 0.  */
>  static int
>  vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
> -                            vec<gimple *> stmts, unsigned stmt_num,
> +                            vec<stmt_vec_info> stmts, unsigned stmt_num,
>                              vec<slp_oprnd_info> *oprnds_info)
>  {
> -  gimple *stmt = stmts[stmt_num];
> +  stmt_vec_info stmt_info = stmts[stmt_num];
>    tree oprnd;
>    unsigned int i, number_of_oprnds;
>    enum vect_def_type dt = vect_uninitialized_def;
> @@ -312,12 +312,12 @@ vect_get_and_check_slp_defs (vec_info *v
>    bool first = stmt_num == 0;
>    bool second = stmt_num == 1;
>
> -  if (is_gimple_call (stmt))
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      {
>        number_of_oprnds = gimple_call_num_args (stmt);
>        first_op_idx = 3;
>      }
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        enum tree_code code = gimple_assign_rhs_code (stmt);
>        number_of_oprnds = gimple_num_ops (stmt) - 1;
> @@ -347,12 +347,13 @@ vect_get_and_check_slp_defs (vec_info *v
>           int *map = maps[*swap];
>
>           if (i < 2)
> -           oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
> +           oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
> +                                            first_op_idx), map[i]);
>           else
> -           oprnd = gimple_op (stmt, map[i]);
> +           oprnd = gimple_op (stmt_info->stmt, map[i]);
>         }
>        else
> -       oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
> +       oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
>
>        oprnd_info = (*oprnds_info)[i];
>
> @@ -518,18 +519,20 @@ vect_get_and_check_slp_defs (vec_info *v
>      {
>        /* If there are already uses of this stmt in a SLP instance then
>           we've committed to the operand order and can't swap it.  */
> -      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
> +      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
>         {
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                "Build SLP failed: cannot swap operands of "
>                                "shared stmt ");
> -             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                               stmt_info->stmt, 0);
>             }
>           return -1;
>         }
>
> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>        if (first_op_cond)
>         {
>           tree cond = gimple_assign_rhs1 (stmt);
> @@ -655,8 +658,9 @@ vect_record_max_nunits (vec_info *vinfo,
>     would be permuted.  */
>
>  static bool
> -vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
> -                              tree vectype, tree_code alt_stmt_code)
> +vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
> +                              unsigned int group_size, tree vectype,
> +                              tree_code alt_stmt_code)
>  {
>    unsigned HOST_WIDE_INT count;
>    if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
> @@ -666,7 +670,8 @@ vect_two_operations_perm_ok_p (vec<gimpl
>    for (unsigned int i = 0; i < count; ++i)
>      {
>        unsigned int elt = i;
> -      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
> +      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
> +      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
>         elt += count;
>        sel.quick_push (elt);
>      }
> @@ -690,12 +695,12 @@ vect_two_operations_perm_ok_p (vec<gimpl
>
>  static bool
>  vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits, bool *matches,
>                        bool *two_operators)
>  {
>    unsigned int i;
> -  gimple *first_stmt = stmts[0], *stmt = stmts[0];
> +  stmt_vec_info first_stmt_info = stmts[0];
>    enum tree_code first_stmt_code = ERROR_MARK;
>    enum tree_code alt_stmt_code = ERROR_MARK;
>    enum tree_code rhs_code = ERROR_MARK;
> @@ -710,9 +715,10 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>    gimple *first_load = NULL, *prev_first_load = NULL;
>
>    /* For every stmt in NODE find its def stmt/s.  */
> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
> +  stmt_vec_info stmt_info;
> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>      {
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      gimple *stmt = stmt_info->stmt;
>        swap[i] = 0;
>        matches[i] = false;
>
> @@ -723,7 +729,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>         }
>
>        /* Fail to vectorize statements marked as unvectorizable.  */
> -      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
> +      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
>          {
>            if (dump_enabled_p ())
>              {
> @@ -755,7 +761,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>        if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
>                                            &nunits_vectype)
>           || (nunits_vectype
> -             && !vect_record_max_nunits (vinfo, stmt, group_size,
> +             && !vect_record_max_nunits (vinfo, stmt_info, group_size,
>                                           nunits_vectype, max_nunits)))
>         {
>           /* Fatal mismatch.  */
> @@ -877,7 +883,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                    && (alt_stmt_code == PLUS_EXPR
>                        || alt_stmt_code == MINUS_EXPR)
>                    && rhs_code == alt_stmt_code)
> -              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
> +             && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
>                     && (first_stmt_code == ARRAY_REF
>                         || first_stmt_code == BIT_FIELD_REF
>                         || first_stmt_code == INDIRECT_REF
> @@ -893,7 +899,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                    "original stmt ");
>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                   first_stmt, 0);
> +                                   first_stmt_info->stmt, 0);
>                 }
>               /* Mismatch.  */
>               continue;
> @@ -915,8 +921,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>
>           if (rhs_code == CALL_EXPR)
>             {
> -             gimple *first_stmt = stmts[0];
> -             if (!compatible_calls_p (as_a <gcall *> (first_stmt),
> +             if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
>                                        as_a <gcall *> (stmt)))
>                 {
>                   if (dump_enabled_p ())
> @@ -933,7 +938,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>         }
>
>        /* Grouped store or load.  */
> -      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         {
>           if (REFERENCE_CLASS_P (lhs))
>             {
> @@ -943,7 +948,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>           else
>             {
>               /* Load.  */
> -              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
> +             first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
>                if (prev_first_load)
>                  {
>                    /* Check that there are no loads from different interleaving
> @@ -1061,7 +1066,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                                              vectype, alt_stmt_code))
>         {
>           for (i = 0; i < group_size; ++i)
> -           if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
> +           if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
>               {
>                 matches[i] = false;
>                 if (dump_enabled_p ())
> @@ -1070,11 +1075,11 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                                      "Build SLP failed: different operation "
>                                      "in stmt ");
>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                     stmts[i], 0);
> +                                     stmts[i]->stmt, 0);
>                     dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                      "original stmt ");
>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                     first_stmt, 0);
> +                                     first_stmt_info->stmt, 0);
>                   }
>               }
>           return false;
> @@ -1090,8 +1095,8 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>     need a special value for deleted that differs from empty.  */
>  struct bst_traits
>  {
> -  typedef vec <gimple *> value_type;
> -  typedef vec <gimple *> compare_type;
> +  typedef vec <stmt_vec_info> value_type;
> +  typedef vec <stmt_vec_info> compare_type;
>    static inline hashval_t hash (value_type);
>    static inline bool equal (value_type existing, value_type candidate);
>    static inline bool is_empty (value_type x) { return !x.exists (); }
> @@ -1105,7 +1110,7 @@ bst_traits::hash (value_type x)
>  {
>    inchash::hash h;
>    for (unsigned i = 0; i < x.length (); ++i)
> -    h.add_int (gimple_uid (x[i]));
> +    h.add_int (gimple_uid (x[i]->stmt));
>    return h.end ();
>  }
>  inline bool
> @@ -1128,7 +1133,7 @@ typedef hash_map <vec <gimple *>, slp_tr
>
>  static slp_tree
>  vect_build_slp_tree_2 (vec_info *vinfo,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits,
>                        vec<slp_tree> *loads,
>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
> @@ -1136,7 +1141,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>
>  static slp_tree
>  vect_build_slp_tree (vec_info *vinfo,
> -                    vec<gimple *> stmts, unsigned int group_size,
> +                    vec<stmt_vec_info> stmts, unsigned int group_size,
>                      poly_uint64 *max_nunits, vec<slp_tree> *loads,
>                      bool *matches, unsigned *npermutes, unsigned *tree_size,
>                      unsigned max_tree_size)
> @@ -1151,7 +1156,7 @@ vect_build_slp_tree (vec_info *vinfo,
>       scalars, see PR81723.  */
>    if (! res)
>      {
> -      vec <gimple *> x;
> +      vec <stmt_vec_info> x;
>        x.create (stmts.length ());
>        x.splice (stmts);
>        bst_fail->add (x);
> @@ -1168,7 +1173,7 @@ vect_build_slp_tree (vec_info *vinfo,
>
>  static slp_tree
>  vect_build_slp_tree_2 (vec_info *vinfo,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits,
>                        vec<slp_tree> *loads,
>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
> @@ -1176,53 +1181,54 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>  {
>    unsigned nops, i, this_tree_size = 0;
>    poly_uint64 this_max_nunits = *max_nunits;
> -  gimple *stmt;
>    slp_tree node;
>
>    matches[0] = false;
>
> -  stmt = stmts[0];
> -  if (is_gimple_call (stmt))
> +  stmt_vec_info stmt_info = stmts[0];
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      nops = gimple_call_num_args (stmt);
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        nops = gimple_num_ops (stmt) - 1;
>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>         nops++;
>      }
> -  else if (gimple_code (stmt) == GIMPLE_PHI)
> +  else if (is_a <gphi *> (stmt_info->stmt))
>      nops = 0;
>    else
>      return NULL;
>
>    /* If the SLP node is a PHI (induction or reduction), terminate
>       the recursion.  */
> -  if (gimple_code (stmt) == GIMPLE_PHI)
> +  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
>      {
>        tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
>        tree vectype = get_vectype_for_scalar_type (scalar_type);
> -      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
> +      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
>                                    max_nunits))
>         return NULL;
>
> -      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
> +      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
>        /* Induction from different IVs is not supported.  */
>        if (def_type == vect_induction_def)
>         {
> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
> -           if (stmt != stmts[0])
> +         stmt_vec_info other_info;
> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
> +           if (stmt_info != other_info)
>               return NULL;
>         }
>        else
>         {
>           /* Else def types have to match.  */
> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
> +         stmt_vec_info other_info;
> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>             {
>               /* But for reduction chains only check on the first stmt.  */
> -             if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
> -                 && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
> +             if (REDUC_GROUP_FIRST_ELEMENT (other_info)
> +                 && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
>                 continue;
> -             if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
> +             if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
>                 return NULL;
>             }
>         }
> @@ -1238,8 +1244,8 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>      return NULL;
>
>    /* If the SLP node is a load, terminate the recursion.  */
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
> -      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> +      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
>      {
>        *max_nunits = this_max_nunits;
>        node = vect_create_new_slp_node (stmts);
> @@ -1250,7 +1256,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>    /* Get at the operands, verifying they are compatible.  */
>    vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
>    slp_oprnd_info oprnd_info;
> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>      {
>        int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
>                                              stmts, i, &oprnds_info);
> @@ -1269,7 +1275,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>    auto_vec<slp_tree, 4> children;
>    auto_vec<slp_tree> this_loads;
>
> -  stmt = stmts[0];
> +  stmt_info = stmts[0];
>
>    if (tree_size)
>      max_tree_size -= *tree_size;
> @@ -1307,8 +1313,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>                  do extra work to cancel the pattern so the uses see the
>                  scalar version.  */
> -             && !is_pattern_stmt_p
> -                   (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
> +             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>             {
>               slp_tree grandchild;
>
> @@ -1352,7 +1357,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>              do extra work to cancel the pattern so the uses see the
>              scalar version.  */
> -         && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
> +         && !is_pattern_stmt_p (stmt_info))
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "Building vector operands from scalars\n");
> @@ -1373,7 +1378,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>              as well as the arms under some constraints.  */
>           && nops == 2
>           && oprnds_info[1]->first_dt == vect_internal_def
> -         && is_gimple_assign (stmt)
> +         && is_gimple_assign (stmt_info->stmt)
>           /* Do so only if the number of unsuccessful permutes was no more
>              than a cut-off, as re-trying the recursive match on
>              possibly each level of the tree would expose exponential
> @@ -1389,9 +1394,10 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                 {
>                   if (matches[j] != !swap_not_matching)
>                     continue;
> -                 gimple *stmt = stmts[j];
> +                 stmt_vec_info stmt_info = stmts[j];
>                   /* Verify if we can swap operands of this stmt.  */
> -                 if (!is_gimple_assign (stmt)
> +                 gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
> +                 if (!stmt
>                       || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
>                     {
>                       if (!swap_not_matching)
> @@ -1406,7 +1412,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                      node and temporarily do that when processing it
>                      (or wrap operand accessors in a helper).  */
>                   else if (swap[j] != 0
> -                          || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
> +                          || STMT_VINFO_NUM_SLP_USES (stmt_info))
>                     {
>                       if (!swap_not_matching)
>                         {
> @@ -1417,7 +1423,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                                                "Build SLP failed: cannot swap "
>                                                "operands of shared stmt ");
>                               dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
> -                                               TDF_SLIM, stmts[j], 0);
> +                                               TDF_SLIM, stmts[j]->stmt, 0);
>                             }
>                           goto fail;
>                         }
> @@ -1454,31 +1460,23 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                  if we end up building the operand from scalars as
>                  we'll continue to process swapped operand two.  */
>               for (j = 0; j < group_size; ++j)
> -               {
> -                 gimple *stmt = stmts[j];
> -                 gimple_set_plf (stmt, GF_PLF_1, false);
> -               }
> +               gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
>               for (j = 0; j < group_size; ++j)
> -               {
> -                 gimple *stmt = stmts[j];
> -                 if (matches[j] == !swap_not_matching)
> -                   {
> -                     /* Avoid swapping operands twice.  */
> -                     if (gimple_plf (stmt, GF_PLF_1))
> -                       continue;
> -                     swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
> -                                        gimple_assign_rhs2_ptr (stmt));
> -                     gimple_set_plf (stmt, GF_PLF_1, true);
> -                   }
> -               }
> +               if (matches[j] == !swap_not_matching)
> +                 {
> +                   gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
> +                   /* Avoid swapping operands twice.  */
> +                   if (gimple_plf (stmt, GF_PLF_1))
> +                     continue;
> +                   swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
> +                                      gimple_assign_rhs2_ptr (stmt));
> +                   gimple_set_plf (stmt, GF_PLF_1, true);
> +                 }
>               /* Verify we swap all duplicates or none.  */
>               if (flag_checking)
>                 for (j = 0; j < group_size; ++j)
> -                 {
> -                   gimple *stmt = stmts[j];
> -                   gcc_assert (gimple_plf (stmt, GF_PLF_1)
> -                               == (matches[j] == !swap_not_matching));
> -                 }
> +                 gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
> +                             == (matches[j] == !swap_not_matching));
>
>               /* If we have all children of child built up from scalars then
>                  just throw that away and build up this node from scalars.  */
> @@ -1486,8 +1484,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>                      to do extra work to cancel the pattern so the uses see the
>                      scalar version.  */
> -                 && !is_pattern_stmt_p
> -                       (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
> +                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>                 {
>                   unsigned int j;
>                   slp_tree grandchild;
> @@ -1550,16 +1547,16 @@ vect_print_slp_tree (dump_flags_t dump_k
>                      slp_tree node)
>  {
>    int i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
>    dump_printf_loc (dump_kind, loc, "node%s\n",
>                    SLP_TREE_DEF_TYPE (node) != vect_internal_def
>                    ? " (external)" : "");
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
>        dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
> -      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
>      }
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_print_slp_tree (dump_kind, loc, child);
> @@ -1575,15 +1572,15 @@ vect_print_slp_tree (dump_flags_t dump_k
>  vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
>  {
>    int i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>      return;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      if (j < 0 || i == j)
> -      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
> +      STMT_SLP_TYPE (stmt_info) = mark;
>
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_mark_slp_stmts (child, mark, j);
> @@ -1596,16 +1593,14 @@ vect_mark_slp_stmts (slp_tree node, enum
>  vect_mark_slp_stmts_relevant (slp_tree node)
>  {
>    int i;
> -  gimple *stmt;
>    stmt_vec_info stmt_info;
>    slp_tree child;
>
>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>      return;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> -      stmt_info = vinfo_for_stmt (stmt);
>        gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
>                    || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
>        STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
> @@ -1622,8 +1617,8 @@ vect_mark_slp_stmts_relevant (slp_tree n
>  vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
>                            vec<unsigned> permutation)
>  {
> -  gimple *stmt;
> -  vec<gimple *> tmp_stmts;
> +  stmt_vec_info stmt_info;
> +  vec<stmt_vec_info> tmp_stmts;
>    unsigned int i;
>    slp_tree child;
>
> @@ -1634,8 +1629,8 @@ vect_slp_rearrange_stmts (slp_tree node,
>    tmp_stmts.create (group_size);
>    tmp_stmts.quick_grow_cleared (group_size);
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> -    tmp_stmts[permutation[i]] = stmt;
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
> +    tmp_stmts[permutation[i]] = stmt_info;
>
>    SLP_TREE_SCALAR_STMTS (node).release ();
>    SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
> @@ -1696,13 +1691,14 @@ vect_attempt_slp_rearrange_stmts (slp_in
>    poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>      {
> -      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
> +      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +      first_stmt_info
> +       = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
>        /* But we have to keep those permutations that are required because
>           of handling of gaps.  */
>        if (known_eq (unrolling_factor, 1U)
> -         || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
> -             && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
> +         || (group_size == DR_GROUP_SIZE (first_stmt_info)
> +             && DR_GROUP_GAP (first_stmt_info) == 0))
>         SLP_TREE_LOAD_PERMUTATION (node).release ();
>        else
>         for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
> @@ -1721,7 +1717,7 @@ vect_supported_load_permutation_p (slp_i
>    unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
>    unsigned int i, j, k, next;
>    slp_tree node;
> -  gimple *stmt, *load, *next_load;
> +  gimple *next_load;
>
>    if (dump_enabled_p ())
>      {
> @@ -1750,18 +1746,18 @@ vect_supported_load_permutation_p (slp_i
>        return false;
>
>    node = SLP_INSTANCE_TREE (slp_instn);
> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>
>    /* Reduction (there are no data-refs in the root).
>       In reduction chain the order of the loads is not important.  */
> -  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  if (!STMT_VINFO_DATA_REF (stmt_info)
> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      vect_attempt_slp_rearrange_stmts (slp_instn);
>
>    /* In basic block vectorization we allow any subchain of an interleaving
>       chain.
>       FORNOW: not supported in loop SLP because of realignment complications.  */
> -  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_BB_VINFO (stmt_info))
>      {
>        /* Check whether the loads in an instance form a subchain and thus
>           no permutation is necessary.  */
> @@ -1771,24 +1767,25 @@ vect_supported_load_permutation_p (slp_i
>             continue;
>           bool subchain_p = true;
>            next_load = NULL;
> -          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
> -            {
> -              if (j != 0
> -                 && (next_load != load
> -                     || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
> +         stmt_vec_info load_info;
> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
> +           {
> +             if (j != 0
> +                 && (next_load != load_info
> +                     || DR_GROUP_GAP (load_info) != 1))
>                 {
>                   subchain_p = false;
>                   break;
>                 }
> -              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
> -            }
> +             next_load = DR_GROUP_NEXT_ELEMENT (load_info);
> +           }
>           if (subchain_p)
>             SLP_TREE_LOAD_PERMUTATION (node).release ();
>           else
>             {
> -             stmt_vec_info group_info
> -               = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
> -             group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
> +             stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +             group_info
> +               = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>               unsigned HOST_WIDE_INT nunits;
>               unsigned k, maxk = 0;
>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
> @@ -1831,7 +1828,7 @@ vect_supported_load_permutation_p (slp_i
>    poly_uint64 test_vf
>      = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
>                              LOOP_VINFO_VECT_FACTOR
> -                            (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
> +                            (STMT_VINFO_LOOP_VINFO (stmt_info)));
>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>      if (node->load_permutation.exists ()
>         && !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
> @@ -1847,15 +1844,15 @@ vect_supported_load_permutation_p (slp_i
>  gimple *
>  vect_find_last_scalar_stmt_in_slp (slp_tree node)
>  {
> -  gimple *last = NULL, *stmt;
> +  gimple *last = NULL;
> +  stmt_vec_info stmt_vinfo;
>
> -  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
> +  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>      {
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>        if (is_pattern_stmt_p (stmt_vinfo))
>         last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
>        else
> -       last = get_later_stmt (stmt, last);
> +       last = get_later_stmt (stmt_vinfo, last);
>      }
>
>    return last;
> @@ -1926,6 +1923,7 @@ calculate_unrolling_factor (poly_uint64
>  vect_analyze_slp_instance (vec_info *vinfo,
>                            gimple *stmt, unsigned max_tree_size)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    slp_instance new_instance;
>    slp_tree node;
>    unsigned int group_size;
> @@ -1934,25 +1932,25 @@ vect_analyze_slp_instance (vec_info *vin
>    stmt_vec_info next_info;
>    unsigned int i;
>    vec<slp_tree> loads;
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
> -  vec<gimple *> scalar_stmts;
> +  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> +  vec<stmt_vec_info> scalar_stmts;
>
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
>        scalar_type = TREE_TYPE (DR_REF (dr));
>        vectype = get_vectype_for_scalar_type (scalar_type);
> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
> +      group_size = DR_GROUP_SIZE (stmt_info);
>      }
> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        gcc_assert (is_a <loop_vec_info> (vinfo));
> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> -      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
> +      group_size = REDUC_GROUP_SIZE (stmt_info);
>      }
>    else
>      {
>        gcc_assert (is_a <loop_vec_info> (vinfo));
> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>        group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
>      }
>
> @@ -1973,38 +1971,38 @@ vect_analyze_slp_instance (vec_info *vin
>    /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
>    scalar_stmts.create (group_size);
>    next = stmt;
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
>        /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
>        while (next)
>          {
> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
> -           scalar_stmts.safe_push (
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
> +         next_info = vinfo_for_stmt (next);
> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
> +             && STMT_VINFO_RELATED_STMT (next_info))
> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
> -            scalar_stmts.safe_push (next);
> +           scalar_stmts.safe_push (next_info);
>            next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>          }
>      }
> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        /* Collect the reduction stmts and store them in
>          SLP_TREE_SCALAR_STMTS.  */
>        while (next)
>          {
> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
> -           scalar_stmts.safe_push (
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
> +         next_info = vinfo_for_stmt (next);
> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
> +             && STMT_VINFO_RELATED_STMT (next_info))
> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
> -            scalar_stmts.safe_push (next);
> +           scalar_stmts.safe_push (next_info);
>            next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>          }
>        /* Mark the first element of the reduction chain as reduction to properly
>          transform the node.  In the reduction analysis phase only the last
>          element of the chain is marked as reduction.  */
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
> +      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>      }
>    else
>      {
> @@ -2068,15 +2066,16 @@ vect_analyze_slp_instance (vec_info *vin
>         {
>           vec<unsigned> load_permutation;
>           int j;
> -         gimple *load, *first_stmt;
> +         stmt_vec_info load_info;
> +         gimple *first_stmt;
>           bool this_load_permuted = false;
>           load_permutation.create (group_size);
>           first_stmt = DR_GROUP_FIRST_ELEMENT
> -             (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
> -         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
> +           (SLP_TREE_SCALAR_STMTS (load_node)[0]);
> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
>             {
> -                 int load_place = vect_get_place_in_interleaving_chain
> -                                    (load, first_stmt);
> +             int load_place = vect_get_place_in_interleaving_chain
> +               (load_info, first_stmt);
>               gcc_assert (load_place != -1);
>               if (load_place != j)
>                 this_load_permuted = true;
> @@ -2124,7 +2123,7 @@ vect_analyze_slp_instance (vec_info *vin
>           FOR_EACH_VEC_ELT (loads, i, load_node)
>             {
>               gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
> -                 (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
> +               (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>               stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
>                   /* Use SLP for strided accesses (or if we
>                      can't load-lanes).  */
> @@ -2307,10 +2306,10 @@ vect_make_slp_decision (loop_vec_info lo
>  static void
>  vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
> +  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
>    imm_use_iterator imm_iter;
>    gimple *use_stmt;
> -  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
> +  stmt_vec_info use_vinfo;
>    slp_tree child;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>    int j;
> @@ -2326,6 +2325,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>        gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
>        /* If we get a pattern stmt here we have to use the LHS of the
>           original stmt for immediate uses.  */
> +      gimple *stmt = stmt_vinfo->stmt;
>        if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
>           && STMT_VINFO_RELATED_STMT (stmt_vinfo))
>         stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
> @@ -2366,7 +2366,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>        if (dump_enabled_p ())
>         {
>           dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
>         }
>        STMT_SLP_TYPE (stmt_vinfo) = hybrid;
>      }
> @@ -2525,9 +2525,8 @@ vect_slp_analyze_node_operations_1 (vec_
>                                     slp_instance node_instance,
>                                     stmt_vector_for_cost *cost_vec)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -  gcc_assert (stmt_info);
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +  gimple *stmt = stmt_info->stmt;
>    gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
>
>    /* For BB vectorization vector types are assigned here.
> @@ -2551,10 +2550,10 @@ vect_slp_analyze_node_operations_1 (vec_
>             return false;
>         }
>
> -      gimple *sstmt;
> +      stmt_vec_info sstmt_info;
>        unsigned int i;
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
> -       STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
> +       STMT_VINFO_VECTYPE (sstmt_info) = vectype;
>      }
>
>    /* Calculate the number of vector statements to be created for the
> @@ -2626,14 +2625,14 @@ vect_slp_analyze_node_operations (vec_in
>    /* Push SLP node def-type to stmt operands.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>         = SLP_TREE_DEF_TYPE (child);
>    bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
>                                                  cost_vec);
>    /* Restore def-types.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>         = vect_internal_def;
>    if (! res)
>      return false;
> @@ -2665,11 +2664,11 @@ vect_slp_analyze_operations (vec_info *v
>                                              instance, visited, &lvisited,
>                                              &cost_vec))
>          {
> +         slp_tree node = SLP_INSTANCE_TREE (instance);
> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "removing SLP instance operations starting from: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
> -                           SLP_TREE_SCALAR_STMTS
> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>           vect_free_slp_instance (instance, false);
>            vinfo->slp_instances.ordered_remove (i);
>           cost_vec.release ();
> @@ -2701,14 +2700,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
>                          stmt_vector_for_cost *cost_vec)
>  {
>    unsigned i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> +      gimple *stmt = stmt_info->stmt;
>        ssa_op_iter op_iter;
>        def_operand_p def_p;
> -      stmt_vec_info stmt_info;
>
>        if ((*life)[i])
>         continue;
> @@ -2724,8 +2723,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
>           gimple *use_stmt;
>           FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
>             if (!is_gimple_debug (use_stmt)
> -               && (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
> -                                            use_stmt)
> +               && (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
>                     || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
>               {
>                 (*life)[i] = true;
> @@ -2740,7 +2738,6 @@ vect_bb_slp_scalar_cost (basic_block bb,
>         continue;
>        gimple_set_visited (stmt, true);
>
> -      stmt_info = vinfo_for_stmt (stmt);
>        vect_cost_for_stmt kind;
>        if (STMT_VINFO_DATA_REF (stmt_info))
>          {
> @@ -2944,11 +2941,11 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
>        if (! vect_slp_analyze_and_verify_instance_alignment (instance)
>           || ! vect_slp_analyze_instance_dependence (instance))
>         {
> +         slp_tree node = SLP_INSTANCE_TREE (instance);
> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "removing SLP instance operations starting from: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
> -                           SLP_TREE_SCALAR_STMTS
> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>           vect_free_slp_instance (instance, false);
>           BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
>           continue;
> @@ -3299,9 +3296,9 @@ vect_get_constant_vectors (tree op, slp_
>                             vec<tree> *vec_oprnds,
>                            unsigned int op_num, unsigned int number_of_vectors)
>  {
> -  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> -  gimple *stmt = stmts[0];
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
> +  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> +  stmt_vec_info stmt_vinfo = stmts[0];
> +  gimple *stmt = stmt_vinfo->stmt;
>    unsigned HOST_WIDE_INT nunits;
>    tree vec_cst;
>    unsigned j, number_of_places_left_in_vector;
> @@ -3320,7 +3317,7 @@ vect_get_constant_vectors (tree op, slp_
>
>    /* Check if vector type is a boolean vector.  */
>    if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
> -      && vect_mask_constant_operand_p (stmt, op_num))
> +      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
>      vector_type
>        = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
>    else
> @@ -3366,8 +3363,9 @@ vect_get_constant_vectors (tree op, slp_
>    bool place_after_defs = false;
>    for (j = 0; j < number_of_copies; j++)
>      {
> -      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
> +      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
>          {
> +         stmt = stmt_vinfo->stmt;
>            if (is_store)
>              op = gimple_assign_rhs1 (stmt);
>            else
> @@ -3496,10 +3494,12 @@ vect_get_constant_vectors (tree op, slp_
>                 {
>                   gsi = gsi_for_stmt
>                           (vect_find_last_scalar_stmt_in_slp (slp_node));
> -                 init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
> +                 init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
> +                                          &gsi);
>                 }
>               else
> -               init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
> +               init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
> +                                        NULL);
>               if (ctor_seq != NULL)
>                 {
>                   gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
> @@ -3612,15 +3612,14 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>           /* We have to check both pattern and original def, if available.  */
>           if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
>             {
> -             gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
> -             stmt_vec_info related
> -               = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
> +             stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
> +             stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
>               tree first_def_op;
>
> -             if (gimple_code (first_def) == GIMPLE_PHI)
> +             if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
>                 first_def_op = gimple_phi_result (first_def);
>               else
> -               first_def_op = gimple_get_lhs (first_def);
> +               first_def_op = gimple_get_lhs (first_def_info->stmt);
>               if (operand_equal_p (oprnd, first_def_op, 0)
>                   || (related
>                       && operand_equal_p (oprnd,
> @@ -3686,8 +3685,7 @@ vect_transform_slp_perm_load (slp_tree n
>                               slp_instance slp_node_instance, bool analyze_only,
>                               unsigned *n_perms)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>    vec_info *vinfo = stmt_info->vinfo;
>    tree mask_element_type = NULL_TREE, mask_type;
>    int vec_index = 0;
> @@ -3779,7 +3777,7 @@ vect_transform_slp_perm_load (slp_tree n
>                                    "permutation requires at "
>                                    "least three vectors ");
>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                   stmt, 0);
> +                                   stmt_info->stmt, 0);
>                 }
>               gcc_assert (analyze_only);
>               return false;
> @@ -3832,6 +3830,7 @@ vect_transform_slp_perm_load (slp_tree n
>                   stmt_vec_info perm_stmt_info;
>                   if (! noop_p)
>                     {
> +                     gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>                       tree perm_dest
>                         = vect_create_destination_var (gimple_assign_lhs (stmt),
>                                                        vectype);
> @@ -3841,7 +3840,8 @@ vect_transform_slp_perm_load (slp_tree n
>                                                first_vec, second_vec,
>                                                mask_vec);
>                       perm_stmt_info
> -                       = vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +                       = vect_finish_stmt_generation (stmt_info, perm_stmt,
> +                                                      gsi);
>                     }
>                   else
>                     /* If mask was NULL_TREE generate the requested
> @@ -3870,7 +3870,6 @@ vect_transform_slp_perm_load (slp_tree n
>  vect_schedule_slp_instance (slp_tree node, slp_instance instance,
>                             scalar_stmts_to_slp_tree_map_t *bst_map)
>  {
> -  gimple *stmt;
>    bool grouped_store, is_store;
>    gimple_stmt_iterator si;
>    stmt_vec_info stmt_info;
> @@ -3897,11 +3896,13 @@ vect_schedule_slp_instance (slp_tree nod
>    /* Push SLP node def-type to stmts.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
> +      {
> +       stmt_vec_info child_stmt_info;
> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
> +      }
>
> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_info = vinfo_for_stmt (stmt);
> +  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>
>    /* VECTYPE is the type of the destination.  */
>    vectype = STMT_VINFO_VECTYPE (stmt_info);
> @@ -3916,7 +3917,7 @@ vect_schedule_slp_instance (slp_tree nod
>      {
>        dump_printf_loc (MSG_NOTE,vect_location,
>                        "------>vectorizing SLP node starting from: ");
> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>      }
>
>    /* Vectorized stmts go before the last scalar stmt which is where
> @@ -3928,7 +3929,7 @@ vect_schedule_slp_instance (slp_tree nod
>       chain is marked as reduction.  */
>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>        && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
> -      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
> +      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>      {
>        STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>        STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
> @@ -3938,29 +3939,33 @@ vect_schedule_slp_instance (slp_tree nod
>       both operations and then performing a merge.  */
>    if (SLP_TREE_TWO_OPERATORS (node))
>      {
> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>        enum tree_code code0 = gimple_assign_rhs_code (stmt);
>        enum tree_code ocode = ERROR_MARK;
> -      gimple *ostmt;
> +      stmt_vec_info ostmt_info;
>        vec_perm_builder mask (group_size, group_size, 1);
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
> -       if (gimple_assign_rhs_code (ostmt) != code0)
> -         {
> -           mask.quick_push (1);
> -           ocode = gimple_assign_rhs_code (ostmt);
> -         }
> -       else
> -         mask.quick_push (0);
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
> +       {
> +         gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
> +         if (gimple_assign_rhs_code (ostmt) != code0)
> +           {
> +             mask.quick_push (1);
> +             ocode = gimple_assign_rhs_code (ostmt);
> +           }
> +         else
> +           mask.quick_push (0);
> +       }
>        if (ocode != ERROR_MARK)
>         {
>           vec<stmt_vec_info> v0;
>           vec<stmt_vec_info> v1;
>           unsigned j;
>           tree tmask = NULL_TREE;
> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>           v0 = SLP_TREE_VEC_STMTS (node).copy ();
>           SLP_TREE_VEC_STMTS (node).truncate (0);
>           gimple_assign_set_rhs_code (stmt, ocode);
> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>           gimple_assign_set_rhs_code (stmt, code0);
>           v1 = SLP_TREE_VEC_STMTS (node).copy ();
>           SLP_TREE_VEC_STMTS (node).truncate (0);
> @@ -3998,20 +4003,24 @@ vect_schedule_slp_instance (slp_tree nod
>                                            gimple_assign_lhs (v1[j]->stmt),
>                                            tmask);
>               SLP_TREE_VEC_STMTS (node).quick_push
> -               (vect_finish_stmt_generation (stmt, vstmt, &si));
> +               (vect_finish_stmt_generation (stmt_info, vstmt, &si));
>             }
>           v0.release ();
>           v1.release ();
>           return false;
>         }
>      }
> -  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
> +                                 instance);
>
>    /* Restore stmt def-types.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
> +      {
> +       stmt_vec_info child_stmt_info;
> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
> +      }
>
>    return is_store;
>  }
> @@ -4024,7 +4033,7 @@ vect_schedule_slp_instance (slp_tree nod
>  static void
>  vect_remove_slp_scalar_calls (slp_tree node)
>  {
> -  gimple *stmt, *new_stmt;
> +  gimple *new_stmt;
>    gimple_stmt_iterator gsi;
>    int i;
>    slp_tree child;
> @@ -4037,13 +4046,12 @@ vect_remove_slp_scalar_calls (slp_tree n
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_remove_slp_scalar_calls (child);
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> -      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
> +      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
> +      if (!stmt || gimple_bb (stmt) == NULL)
>         continue;
> -      stmt_info = vinfo_for_stmt (stmt);
> -      if (stmt_info == NULL_STMT_VEC_INFO
> -         || is_pattern_stmt_p (stmt_info)
> +      if (is_pattern_stmt_p (stmt_info)
>           || !PURE_SLP_STMT (stmt_info))
>         continue;
>        lhs = gimple_call_lhs (stmt);
> @@ -4085,7 +4093,7 @@ vect_schedule_slp (vec_info *vinfo)
>    FOR_EACH_VEC_ELT (slp_instances, i, instance)
>      {
>        slp_tree root = SLP_INSTANCE_TREE (instance);
> -      gimple *store;
> +      stmt_vec_info store_info;
>        unsigned int j;
>        gimple_stmt_iterator gsi;
>
> @@ -4099,20 +4107,20 @@ vect_schedule_slp (vec_info *vinfo)
>        if (is_a <loop_vec_info> (vinfo))
>         vect_remove_slp_scalar_calls (root);
>
> -      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
> +      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
>                    && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
>          {
> -          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
> -            break;
> +         if (!STMT_VINFO_DATA_REF (store_info))
> +           break;
>
> -         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
> -           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
> -          /* Free the attached stmt_vec_info and remove the stmt.  */
> -          gsi = gsi_for_stmt (store);
> -         unlink_stmt_vdef (store);
> -          gsi_remove (&gsi, true);
> -         release_defs (store);
> -          free_stmt_vec_info (store);
> +         if (is_pattern_stmt_p (store_info))
> +           store_info = STMT_VINFO_RELATED_STMT (store_info);
> +         /* Free the attached stmt_vec_info and remove the stmt.  */
> +         gsi = gsi_for_stmt (store_info);
> +         unlink_stmt_vdef (store_info);
> +         gsi_remove (&gsi, true);
> +         release_defs (store_info);
> +         free_stmt_vec_info (store_info);
>          }
>      }
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:00.397042684 +0100
> @@ -665,7 +665,8 @@ vect_slp_analyze_data_ref_dependence (st
>
>  static bool
>  vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
> -                                  vec<gimple *> stores, gimple *last_store)
> +                                  vec<stmt_vec_info> stores,
> +                                  gimple *last_store)
>  {
>    /* This walks over all stmts involved in the SLP load/store done
>       in NODE verifying we can sink them up to the last stmt in the
> @@ -673,13 +674,13 @@ vect_slp_analyze_node_dependences (slp_i
>    gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>      {
> -      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
> -      if (access == last_access)
> +      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
> +      if (access_info == last_access)
>         continue;
> -      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
> +      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
>        ao_ref ref;
>        bool ref_initialized_p = false;
> -      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
> +      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
>            gsi_stmt (gsi) != last_access; gsi_next (&gsi))

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [19/46] Make vect_dr_stmt return a stmt_vec_info
  2018-07-24 10:01 ` [19/46] Make vect_dr_stmt return a stmt_vec_info Richard Sandiford
@ 2018-07-25  9:28   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:28 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes vect_dr_stmt return a stmt_vec_info instead of a
> gimple stmt.  Rather than retain a separate gimple stmt variable
> in cases where both existed, the patch replaces uses of the gimple
> variable with uses of the stmt_vec_info.  Later patches do this
> more generally.
>
> Many things that are keyed off a data_reference would these days
> be better keyed off a stmt_vec_info, but it's more convenient
> to do that later in the series.  The vect_dr_stmt calls that are
> left over do still benefit from this patch.
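>
> As a minimal sketch of the resulting pattern (illustrative only,
> not lifted verbatim from the patch), a caller that previously did:
>
>   gimple *stmt = vect_dr_stmt (dr);
>   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>
> now collapses to:
>
>   stmt_vec_info stmt_info = vect_dr_stmt (dr);
>
> with the STMT_VINFO_* accessors applied to stmt_info directly.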

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_dr_stmt): Return a stmt_vec_info rather
>         than a gimple stmt.
>         * tree-vect-data-refs.c (vect_analyze_data_ref_dependence)
>         (vect_slp_analyze_data_ref_dependence, vect_record_base_alignments)
>         (vect_calculate_target_alignment, vect_compute_data_ref_alignment)
>         (vect_update_misalignment_for_peel, vect_verify_datarefs_alignment)
>         (vector_alignment_reachable_p, vect_get_data_access_cost)
>         (vect_get_peeling_costs_all_drs, vect_peeling_hash_get_lowest_cost)
>         (vect_peeling_supportable, vect_enhance_data_refs_alignment)
>         (vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
>         (vect_analyze_group_access_1, vect_analyze_group_access)
>         (vect_analyze_data_ref_access, vect_analyze_data_ref_accesses)
>         (vect_vfa_access_size, vect_small_gap_p, vect_analyze_data_refs)
>         (vect_supportable_dr_alignment): Remove vinfo_for_stmt from the
>         result of vect_dr_stmt and use the stmt_vec_info instead of
>         the associated gimple stmt.
>         * tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
>         (vect_gen_prolog_loop_niters): Likewise.
>         * tree-vect-loop.c (vect_analyze_loop_2): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:00.401042649 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:04.033010396 +0100
> @@ -1370,7 +1370,7 @@ vect_dr_behavior (data_reference *dr)
>     a pattern this returns the corresponding pattern stmt.  Otherwise
>     DR_STMT is returned.  */
>
> -inline gimple *
> +inline stmt_vec_info
>  vect_dr_stmt (data_reference *dr)
>  {
>    gimple *stmt = DR_STMT (dr);
> @@ -1379,7 +1379,7 @@ vect_dr_stmt (data_reference *dr)
>      return STMT_VINFO_RELATED_STMT (stmt_info);
>    /* DR_STMT should never refer to a stmt in a pattern replacement.  */
>    gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
> -  return stmt;
> +  return stmt_info;
>  }
>
>  /* Return true if the vect cost model is unlimited.  */
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:00.397042684 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:04.029010432 +0100
> @@ -294,8 +294,8 @@ vect_analyze_data_ref_dependence (struct
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
> -  stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
> +  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> +  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
>    lambda_vector dist_v;
>    unsigned int loop_depth;
>
> @@ -627,9 +627,9 @@ vect_slp_analyze_data_ref_dependence (st
>
>    /* If dra and drb are part of the same interleaving chain consider
>       them independent.  */
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (vect_dr_stmt (dra)))
> -      && (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (dra)))
> -         == DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (drb)))))
> +  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
> +      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
> +         == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
>      return false;
>
>    /* Unknown data dependence.  */
> @@ -841,19 +841,18 @@ vect_record_base_alignments (vec_info *v
>    unsigned int i;
>    FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
>      {
> -      gimple *stmt = vect_dr_stmt (dr);
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        if (!DR_IS_CONDITIONAL_IN_STMT (dr)
>           && STMT_VINFO_VECTORIZABLE (stmt_info)
>           && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>         {
> -         vect_record_base_alignment (vinfo, stmt, &DR_INNERMOST (dr));
> +         vect_record_base_alignment (vinfo, stmt_info, &DR_INNERMOST (dr));
>
>           /* If DR is nested in the loop that is being vectorized, we can also
>              record the alignment of the base wrt the outer loop.  */
> -         if (loop && nested_in_vect_loop_p (loop, stmt))
> +         if (loop && nested_in_vect_loop_p (loop, stmt_info))
>             vect_record_base_alignment
> -               (vinfo, stmt, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
> +               (vinfo, stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
>         }
>      }
>  }
> @@ -863,8 +862,7 @@ vect_record_base_alignments (vec_info *v
>  static unsigned int
>  vect_calculate_target_alignment (struct data_reference *dr)
>  {
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    return targetm.vectorize.preferred_vector_alignment (vectype);
>  }
> @@ -882,8 +880,7 @@ vect_calculate_target_alignment (struct
>  static void
>  vect_compute_data_ref_alignment (struct data_reference *dr)
>  {
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> @@ -923,7 +920,7 @@ vect_compute_data_ref_alignment (struct
>       stays the same throughout the execution of the inner-loop, which is why
>       we have to check that the stride of the dataref in the inner-loop evenly
>       divides by the vector alignment.  */
> -  else if (nested_in_vect_loop_p (loop, stmt))
> +  else if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        step_preserves_misalignment_p
>         = (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
> @@ -1074,8 +1071,8 @@ vect_update_misalignment_for_peel (struc
>    struct data_reference *current_dr;
>    int dr_size = vect_get_scalar_dr_size (dr);
>    int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
> -  stmt_vec_info peel_stmt_info = vinfo_for_stmt (vect_dr_stmt (dr_peel));
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
>
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
> @@ -1086,8 +1083,7 @@ vect_update_misalignment_for_peel (struc
>
>    /* It can be assumed that the data refs with the same alignment as dr_peel
>       are aligned in the vector loop.  */
> -  same_aligned_drs
> -    = STMT_VINFO_SAME_ALIGN_REFS (vinfo_for_stmt (vect_dr_stmt (dr_peel)));
> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>      {
>        if (current_dr != dr)
> @@ -1167,15 +1163,14 @@ vect_verify_datarefs_alignment (loop_vec
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      gimple *stmt = vect_dr_stmt (dr);
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
>
>        /* For interleaving, only the alignment of the first access matters.   */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
> +         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
>         continue;
>
>        /* Strided accesses perform only component accesses, alignment is
> @@ -1212,8 +1207,7 @@ not_size_aligned (tree exp)
>  static bool
>  vector_alignment_reachable_p (struct data_reference *dr)
>  {
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -1282,8 +1276,7 @@ vect_get_data_access_cost (struct data_r
>                            stmt_vector_for_cost *body_cost_vec,
>                            stmt_vector_for_cost *prologue_cost_vec)
>  {
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    int ncopies;
>
> @@ -1412,16 +1405,15 @@ vect_get_peeling_costs_all_drs (vec<data
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      gimple *stmt = vect_dr_stmt (dr);
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
>
>        /* For interleaving, only the alignment of the first access
>           matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -          && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
> -        continue;
> +         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
> +       continue;
>
>        /* Strided accesses perform only component accesses, alignment is
>           irrelevant for them.  */
> @@ -1453,8 +1445,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    vect_peel_info elem = *slot;
>    int dummy;
>    unsigned int inside_cost = 0, outside_cost = 0;
> -  gimple *stmt = vect_dr_stmt (elem->dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>                        epilogue_cost_vec;
> @@ -1537,8 +1528,6 @@ vect_peeling_supportable (loop_vec_info
>    unsigned i;
>    struct data_reference *dr = NULL;
>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
> -  gimple *stmt;
> -  stmt_vec_info stmt_info;
>    enum dr_alignment_support supportable_dr_alignment;
>
>    /* Ensure that all data refs can be vectorized after the peel.  */
> @@ -1549,12 +1538,11 @@ vect_peeling_supportable (loop_vec_info
>        if (dr == dr0)
>         continue;
>
> -      stmt = vect_dr_stmt (dr);
> -      stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        /* For interleaving, only the alignment of the first access
>          matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
> +         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
>         continue;
>
>        /* Strided accesses perform only component accesses, alignment is
> @@ -1678,8 +1666,6 @@ vect_enhance_data_refs_alignment (loop_v
>    bool do_peeling = false;
>    bool do_versioning = false;
>    bool stat;
> -  gimple *stmt;
> -  stmt_vec_info stmt_info;
>    unsigned int npeel = 0;
>    bool one_misalignment_known = false;
>    bool one_misalignment_unknown = false;
> @@ -1731,8 +1717,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt = vect_dr_stmt (dr);
> -      stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1740,8 +1725,8 @@ vect_enhance_data_refs_alignment (loop_v
>        /* For interleaving, only the alignment of the first access
>           matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -          && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt)
> -        continue;
> +         && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info)
> +       continue;
>
>        /* For scatter-gather or invariant accesses there is nothing
>          to enhance.  */
> @@ -1943,8 +1928,7 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        peel_for_unknown_alignment.peel_info.count = 1
> -       + STMT_VINFO_SAME_ALIGN_REFS
> -       (vinfo_for_stmt (vect_dr_stmt (dr0))).length ();
> +       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
>      }
>
>    peel_for_unknown_alignment.peel_info.npeel = 0;
> @@ -2025,8 +2009,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt = vect_dr_stmt (dr0);
> -      stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
>        vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>        if (known_alignment_for_access_p (dr0))
> @@ -2049,7 +2032,7 @@ vect_enhance_data_refs_alignment (loop_v
>           /* For interleaved data access every iteration accesses all the
>              members of the group, therefore we divide the number of iterations
>              by the group size.  */
> -         stmt_info = vinfo_for_stmt (vect_dr_stmt (dr0));
> +         stmt_info = vect_dr_stmt (dr0);
>           if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>             npeel /= DR_GROUP_SIZE (stmt_info);
>
> @@ -2123,7 +2106,7 @@ vect_enhance_data_refs_alignment (loop_v
>               {
>                 /* Strided accesses perform only component accesses, alignment
>                    is irrelevant for them.  */
> -               stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
> +               stmt_info = vect_dr_stmt (dr);
>                 if (STMT_VINFO_STRIDED_P (stmt_info)
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
> @@ -2172,14 +2155,13 @@ vect_enhance_data_refs_alignment (loop_v
>      {
>        FOR_EACH_VEC_ELT (datarefs, i, dr)
>          {
> -         stmt = vect_dr_stmt (dr);
> -         stmt_info = vinfo_for_stmt (stmt);
> +         stmt_vec_info stmt_info = vect_dr_stmt (dr);
>
>           /* For interleaving, only the alignment of the first access
>              matters.  */
>           if (aligned_access_p (dr)
>               || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -                 && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt))
> +                 && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
>             continue;
>
>           if (STMT_VINFO_STRIDED_P (stmt_info))
> @@ -2196,7 +2178,6 @@ vect_enhance_data_refs_alignment (loop_v
>
>            if (!supportable_dr_alignment)
>              {
> -             gimple *stmt;
>                int mask;
>                tree vectype;
>
> @@ -2208,9 +2189,9 @@ vect_enhance_data_refs_alignment (loop_v
>                    break;
>                  }
>
> -              stmt = vect_dr_stmt (dr);
> -              vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> -              gcc_assert (vectype);
> +             stmt_info = vect_dr_stmt (dr);
> +             vectype = STMT_VINFO_VECTYPE (stmt_info);
> +             gcc_assert (vectype);
>
>               /* At present we don't support versioning for alignment
>                  with variable VF, since there's no guarantee that the
> @@ -2237,8 +2218,7 @@ vect_enhance_data_refs_alignment (loop_v
>                gcc_assert (!LOOP_VINFO_PTR_MASK (loop_vinfo)
>                            || LOOP_VINFO_PTR_MASK (loop_vinfo) == mask);
>                LOOP_VINFO_PTR_MASK (loop_vinfo) = mask;
> -              LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).safe_push (
> -                     vect_dr_stmt (dr));
> +             LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).safe_push (stmt_info);
>              }
>          }
>
> @@ -2298,8 +2278,8 @@ vect_find_same_alignment_drs (struct dat
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
> -  stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
> +  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> +  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
>
>    if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
>      return;
> @@ -2372,7 +2352,7 @@ vect_analyze_data_refs_alignment (loop_v
>    vect_record_base_alignments (vinfo);
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        if (STMT_VINFO_VECTORIZABLE (stmt_info))
>         vect_compute_data_ref_alignment (dr);
>      }
> @@ -2451,8 +2431,7 @@ vect_analyze_group_access_1 (struct data
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    HOST_WIDE_INT dr_step = -1;
> @@ -2491,7 +2470,7 @@ vect_analyze_group_access_1 (struct data
>      groupsize = 0;
>
>    /* Not consecutive access is possible only if it is a part of interleaving.  */
> -  if (!DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  if (!DR_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        /* Check if it this DR is a part of interleaving, and is a single
>          element of the group that is accessed in the loop.  */
> @@ -2502,8 +2481,8 @@ vect_analyze_group_access_1 (struct data
>           && (dr_step % type_size) == 0
>           && groupsize > 0)
>         {
> -         DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = stmt;
> -         DR_GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize;
> +         DR_GROUP_FIRST_ELEMENT (stmt_info) = stmt_info;
> +         DR_GROUP_SIZE (stmt_info) = groupsize;
>           DR_GROUP_GAP (stmt_info) = groupsize - 1;
>           if (dump_enabled_p ())
>             {
> @@ -2522,29 +2501,30 @@ vect_analyze_group_access_1 (struct data
>          {
>           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                            "not consecutive access ");
> -         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                           stmt_info->stmt, 0);
>          }
>
>        if (bb_vinfo)
> -        {
> -          /* Mark the statement as unvectorizable.  */
> -          STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr))) = false;
> -          return true;
> -        }
> +       {
> +         /* Mark the statement as unvectorizable.  */
> +         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +         return true;
> +       }
>
>        dump_printf_loc (MSG_NOTE, vect_location, "using strided accesses\n");
>        STMT_VINFO_STRIDED_P (stmt_info) = true;
>        return true;
>      }
>
> -  if (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) == stmt)
> +  if (DR_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>      {
>        /* First stmt in the interleaving chain. Check the chain.  */
> -      gimple *next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
> +      gimple *next = DR_GROUP_NEXT_ELEMENT (stmt_info);
>        struct data_reference *data_ref = dr;
>        unsigned int count = 1;
>        tree prev_init = DR_INIT (data_ref);
> -      gimple *prev = stmt;
> +      gimple *prev = stmt_info;
>        HOST_WIDE_INT diff, gaps = 0;
>
>        /* By construction, all group members have INTEGER_CST DR_INITs.  */
> @@ -2643,9 +2623,9 @@ vect_analyze_group_access_1 (struct data
>          difference between the groupsize and the last accessed
>          element.
>          When there is no gap, this difference should be 0.  */
> -      DR_GROUP_GAP (vinfo_for_stmt (stmt)) = groupsize - last_accessed_element;
> +      DR_GROUP_GAP (stmt_info) = groupsize - last_accessed_element;
>
> -      DR_GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize;
> +      DR_GROUP_SIZE (stmt_info) = groupsize;
>        if (dump_enabled_p ())
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2656,22 +2636,22 @@ vect_analyze_group_access_1 (struct data
>             dump_printf (MSG_NOTE, "store ");
>           dump_printf (MSG_NOTE, "of size %u starting with ",
>                        (unsigned)groupsize);
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> -         if (DR_GROUP_GAP (vinfo_for_stmt (stmt)) != 0)
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
> +         if (DR_GROUP_GAP (stmt_info) != 0)
>             dump_printf_loc (MSG_NOTE, vect_location,
>                              "There is a gap of %u elements after the group\n",
> -                            DR_GROUP_GAP (vinfo_for_stmt (stmt)));
> +                            DR_GROUP_GAP (stmt_info));
>         }
>
>        /* SLP: create an SLP data structure for every interleaving group of
>          stores for further analysis in vect_analyse_slp.  */
>        if (DR_IS_WRITE (dr) && !slp_impossible)
> -        {
> -          if (loop_vinfo)
> -            LOOP_VINFO_GROUPED_STORES (loop_vinfo).safe_push (stmt);
> -          if (bb_vinfo)
> -            BB_VINFO_GROUPED_STORES (bb_vinfo).safe_push (stmt);
> -        }
> +       {
> +         if (loop_vinfo)
> +           LOOP_VINFO_GROUPED_STORES (loop_vinfo).safe_push (stmt_info);
> +         if (bb_vinfo)
> +           BB_VINFO_GROUPED_STORES (bb_vinfo).safe_push (stmt_info);
> +       }
>      }
>
>    return true;
> @@ -2689,7 +2669,7 @@ vect_analyze_group_access (struct data_r
>      {
>        /* Dissolve the group if present.  */
>        gimple *next;
> -      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (vect_dr_stmt (dr)));
> +      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
>        while (stmt)
>         {
>           stmt_vec_info vinfo = vinfo_for_stmt (stmt);
> @@ -2712,8 +2692,7 @@ vect_analyze_data_ref_access (struct dat
>  {
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>
> @@ -2734,8 +2713,8 @@ vect_analyze_data_ref_access (struct dat
>    /* Allow loads with zero step in inner-loop vectorization.  */
>    if (loop_vinfo && integer_zerop (step))
>      {
> -      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
> -      if (!nested_in_vect_loop_p (loop, stmt))
> +      DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
> +      if (!nested_in_vect_loop_p (loop, stmt_info))
>         return DR_IS_READ (dr);
>        /* Allow references with zero step for outer loops marked
>          with pragma omp simd only - it guarantees absence of
> @@ -2749,11 +2728,11 @@ vect_analyze_data_ref_access (struct dat
>         }
>      }
>
> -  if (loop && nested_in_vect_loop_p (loop, stmt))
> +  if (loop && nested_in_vect_loop_p (loop, stmt_info))
>      {
>        /* Interleaved accesses are not yet supported within outer-loop
>          vectorization for references in the inner-loop.  */
> -      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
> +      DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
>
>        /* For the rest of the analysis we use the outer-loop step.  */
>        step = STMT_VINFO_DR_STEP (stmt_info);
> @@ -2775,12 +2754,12 @@ vect_analyze_data_ref_access (struct dat
>               && !compare_tree_int (TYPE_SIZE_UNIT (scalar_type), -dr_step)))
>         {
>           /* Mark that it is not interleaving.  */
> -         DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
> +         DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
>           return true;
>         }
>      }
>
> -  if (loop && nested_in_vect_loop_p (loop, stmt))
> +  if (loop && nested_in_vect_loop_p (loop, stmt_info))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2939,7 +2918,7 @@ vect_analyze_data_ref_accesses (vec_info
>    for (i = 0; i < datarefs_copy.length () - 1;)
>      {
>        data_reference_p dra = datarefs_copy[i];
> -      stmt_vec_info stmtinfo_a = vinfo_for_stmt (vect_dr_stmt (dra));
> +      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
>        stmt_vec_info lastinfo = NULL;
>        if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
>           || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
> @@ -2950,7 +2929,7 @@ vect_analyze_data_ref_accesses (vec_info
>        for (i = i + 1; i < datarefs_copy.length (); ++i)
>         {
>           data_reference_p drb = datarefs_copy[i];
> -         stmt_vec_info stmtinfo_b = vinfo_for_stmt (vect_dr_stmt (drb));
> +         stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
>           if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
>               || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
>             break;
> @@ -3073,7 +3052,7 @@ vect_analyze_data_ref_accesses (vec_info
>      }
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
> -    if (STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr)))
> +    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
>          && !vect_analyze_data_ref_access (dr))
>        {
>         if (dump_enabled_p ())
> @@ -3081,11 +3060,11 @@ vect_analyze_data_ref_accesses (vec_info
>                            "not vectorized: complicated access pattern.\n");
>
>          if (is_a <bb_vec_info> (vinfo))
> -          {
> -            /* Mark the statement as not vectorizable.  */
> -            STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (vect_dr_stmt (dr))) = false;
> -            continue;
> -          }
> +         {
> +           /* Mark the statement as not vectorizable.  */
> +           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +           continue;
> +         }
>          else
>           {
>             datarefs_copy.release ();
> @@ -3124,7 +3103,7 @@ vect_vfa_segment_size (struct data_refer
>  static unsigned HOST_WIDE_INT
>  vect_vfa_access_size (data_reference *dr)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (vect_dr_stmt (dr));
> +  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
>    tree ref_type = TREE_TYPE (DR_REF (dr));
>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>    unsigned HOST_WIDE_INT access_size = ref_size;
> @@ -3298,7 +3277,7 @@ vect_check_lower_bound (loop_vec_info lo
>  static bool
>  vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (vect_dr_stmt (dr));
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
> @@ -4141,14 +4120,11 @@ vect_analyze_data_refs (vec_info *vinfo,
>    vec<data_reference_p> datarefs = vinfo->shared->datarefs;
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      gimple *stmt;
> -      stmt_vec_info stmt_info;
>        enum { SG_NONE, GATHER, SCATTER } gatherscatter = SG_NONE;
>        poly_uint64 vf;
>
>        gcc_assert (DR_REF (dr));
> -      stmt = vect_dr_stmt (dr);
> -      stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>
>        /* Check that analysis of the data-ref succeeded.  */
>        if (!DR_BASE_ADDRESS (dr) || !DR_OFFSET (dr) || !DR_INIT (dr)
> @@ -4168,7 +4144,7 @@ vect_analyze_data_refs (vec_info *vinfo,
>           /* If target supports vector gather loads or scatter stores,
>              see if they can't be used.  */
>           if (is_a <loop_vec_info> (vinfo)
> -             && !nested_in_vect_loop_p (loop, stmt))
> +             && !nested_in_vect_loop_p (loop, stmt_info))
>             {
>               if (maybe_gather || maybe_scatter)
>                 {
> @@ -4186,7 +4162,8 @@ vect_analyze_data_refs (vec_info *vinfo,
>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                     "not vectorized: data ref analysis "
>                                     "failed ");
> -                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                                   stmt_info->stmt, 0);
>                 }
>               if (is_a <bb_vec_info> (vinfo))
>                 {
> @@ -4202,14 +4179,15 @@ vect_analyze_data_refs (vec_info *vinfo,
>        /* See if this was detected as SIMD lane access.  */
>        if (dr->aux == (void *)-1)
>         {
> -         if (nested_in_vect_loop_p (loop, stmt))
> +         if (nested_in_vect_loop_p (loop, stmt_info))
>             {
>               if (dump_enabled_p ())
>                 {
>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                    "not vectorized: data ref analysis "
>                                    "failed ");
> -                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                                   stmt_info->stmt, 0);
>                 }
>               return false;
>             }
> @@ -4224,7 +4202,8 @@ vect_analyze_data_refs (vec_info *vinfo,
>                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                 "not vectorized: base object not addressable "
>                                "for stmt: ");
> -              dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                               stmt_info->stmt, 0);
>              }
>            if (is_a <bb_vec_info> (vinfo))
>             {
> @@ -4240,14 +4219,15 @@ vect_analyze_data_refs (vec_info *vinfo,
>           && DR_STEP (dr)
>           && TREE_CODE (DR_STEP (dr)) != INTEGER_CST)
>         {
> -         if (nested_in_vect_loop_p (loop, stmt))
> +         if (nested_in_vect_loop_p (loop, stmt_info))
>             {
>               if (dump_enabled_p ())
>                 {
>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                     "not vectorized: not suitable for strided "
>                                     "load ");
> -                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                                   stmt_info->stmt, 0);
>                 }
>               return false;
>             }
> @@ -4262,7 +4242,7 @@ vect_analyze_data_refs (vec_info *vinfo,
>          inner-most enclosing loop).  We do that by building a reference to the
>          first location accessed by the inner-loop, and analyze it relative to
>          the outer-loop.  */
> -      if (loop && nested_in_vect_loop_p (loop, stmt))
> +      if (loop && nested_in_vect_loop_p (loop, stmt_info))
>         {
>           /* Build a reference to the first location accessed by the
>              inner loop: *(BASE + INIT + OFFSET).  By construction,
> @@ -4329,7 +4309,8 @@ vect_analyze_data_refs (vec_info *vinfo,
>              {
>                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                 "not vectorized: no vectype for stmt: ");
> -              dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                               stmt_info->stmt, 0);
>                dump_printf (MSG_MISSED_OPTIMIZATION, " scalar_type: ");
>                dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_DETAILS,
>                                   scalar_type);
> @@ -4351,7 +4332,7 @@ vect_analyze_data_refs (vec_info *vinfo,
>             {
>               dump_printf_loc (MSG_NOTE, vect_location,
>                                "got vectype for stmt: ");
> -             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>               dump_generic_expr (MSG_NOTE, TDF_SLIM,
>                                  STMT_VINFO_VECTYPE (stmt_info));
>               dump_printf (MSG_NOTE, "\n");
> @@ -4366,7 +4347,8 @@ vect_analyze_data_refs (vec_info *vinfo,
>        if (gatherscatter != SG_NONE)
>         {
>           gather_scatter_info gs_info;
> -         if (!vect_check_gather_scatter (stmt, as_a <loop_vec_info> (vinfo),
> +         if (!vect_check_gather_scatter (stmt_info,
> +                                         as_a <loop_vec_info> (vinfo),
>                                           &gs_info)
>               || !get_vectype_for_scalar_type (TREE_TYPE (gs_info.offset)))
>             {
> @@ -4378,7 +4360,8 @@ vect_analyze_data_refs (vec_info *vinfo,
>                                    "load " :
>                                    "not vectorized: not suitable for scatter "
>                                    "store ");
> -                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                                   stmt_info->stmt, 0);
>                 }
>               return false;
>             }
> @@ -6459,8 +6442,7 @@ enum dr_alignment_support
>  vect_supportable_dr_alignment (struct data_reference *dr,
>                                 bool check_aligned_accesses)
>  {
> -  gimple *stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    machine_mode mode = TYPE_MODE (vectype);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> @@ -6472,16 +6454,16 @@ vect_supportable_dr_alignment (struct da
>
>    /* For now assume all conditional loads/stores support unaligned
>       access without any special code.  */
> -  if (is_gimple_call (stmt)
> -      && gimple_call_internal_p (stmt)
> -      && (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
> -         || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
> -    return dr_unaligned_supported;
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
> +    if (gimple_call_internal_p (stmt)
> +       && (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
> +           || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
> +      return dr_unaligned_supported;
>
>    if (loop_vinfo)
>      {
>        vect_loop = LOOP_VINFO_LOOP (loop_vinfo);
> -      nested_in_vect_loop = nested_in_vect_loop_p (vect_loop, stmt);
> +      nested_in_vect_loop = nested_in_vect_loop_p (vect_loop, stmt_info);
>      }
>
>    /* Possibly unaligned access.  */
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:22:33.821278677 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:23:04.029010432 +0100
> @@ -1560,8 +1560,7 @@ vect_update_ivs_after_vectorizer (loop_v
>  get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
>  {
>    struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
> -  gimple *dr_stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (dr_stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> @@ -1571,7 +1570,7 @@ get_misalign_in_elems (gimple **seq, loo
>    tree offset = (negative
>                  ? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1)
>                  : size_zero_node);
> -  tree start_addr = vect_create_addr_base_for_vector_ref (dr_stmt, seq,
> +  tree start_addr = vect_create_addr_base_for_vector_ref (stmt_info, seq,
>                                                           offset);
>    tree type = unsigned_type_for (TREE_TYPE (start_addr));
>    tree target_align_minus_1 = build_int_cst (type, target_align - 1);
> @@ -1631,8 +1630,7 @@ vect_gen_prolog_loop_niters (loop_vec_in
>    tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
>    gimple_seq stmts = NULL, new_stmts = NULL;
>    tree iters, iters_name;
> -  gimple *dr_stmt = vect_dr_stmt (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (dr_stmt);
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:00.397042684 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:04.033010396 +0100
> @@ -2145,8 +2145,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>           if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
>             {
>               struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
> -             tree vectype
> -               = STMT_VINFO_VECTYPE (vinfo_for_stmt (vect_dr_stmt (dr)));
> +             tree vectype = STMT_VINFO_VECTYPE (vect_dr_stmt (dr));
>               niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
>             }
>           else

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [20/46] Make *FIRST_ELEMENT and *NEXT_ELEMENT stmt_vec_infos
  2018-07-24 10:01 ` [20/46] Make *FIRST_ELEMENT and *NEXT_ELEMENT stmt_vec_infos Richard Sandiford
@ 2018-07-25  9:28   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:28 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes {REDUC,DR}_GROUP_{FIRST,NEXT}_ELEMENT from a
> gimple stmt to a stmt_vec_info.
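>
> As an illustrative sketch (based on the hunks below, not copied
> verbatim), walking an interleaving chain used to need a stmt_vec_info
> lookup at every step:
>
>   gimple *next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
>   while (next)
>     next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>
> whereas the fields now hand back stmt_vec_infos directly:
>
>   stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
>   while (next)
>     next = DR_GROUP_NEXT_ELEMENT (next);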

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::first_element): Change from
>         a gimple stmt to a stmt_vec_info.
>         (_stmt_vec_info::next_element): Likewise.
>         * tree-vect-data-refs.c (vect_update_misalignment_for_peel)
>         (vect_slp_analyze_and_verify_node_alignment)
>         (vect_analyze_group_access_1, vect_analyze_group_access)
>         (vect_small_gap_p, vect_prune_runtime_alias_test_list)
>         (vect_create_data_ref_ptr, vect_record_grouped_load_vectors)
>         (vect_supportable_dr_alignment): Update accordingly.
>         * tree-vect-loop.c (vect_fixup_reduc_chain): Likewise.
>         (vect_fixup_scalar_cycles_with_patterns, vect_is_slp_reduction)
>         (vect_is_simple_reduction, vectorizable_reduction): Likewise.
>         * tree-vect-patterns.c (vect_reassociating_reduction_p): Likewise.
>         * tree-vect-slp.c (vect_build_slp_tree_1)
>         (vect_attempt_slp_rearrange_stmts, vect_supported_load_permutation_p)
>         (vect_split_slp_store_group, vect_analyze_slp_instance)
>         (vect_analyze_slp, vect_transform_slp_perm_load): Likewise.
>         * tree-vect-stmts.c (vect_model_store_cost, vect_model_load_cost)
>         (get_group_load_store_type, get_load_store_type)
>         (get_group_alias_ptr_type, vectorizable_store, vectorizable_load)
>         (vect_transform_stmt, vect_remove_stores): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:04.033010396 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:08.536970400 +0100
> @@ -871,9 +871,9 @@ struct _stmt_vec_info {
>
>    /* Interleaving and reduction chains info.  */
>    /* First element in the group.  */
> -  gimple *first_element;
> +  stmt_vec_info first_element;
>    /* Pointer to the next element in the group.  */
> -  gimple *next_element;
> +  stmt_vec_info next_element;
>    /* For data-refs, in case that two or more stmts share data-ref, this is the
>       pointer to the previously detected stmt with the same dr.  */
>    gimple *same_dr_stmt;
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:04.029010432 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:08.532970436 +0100
> @@ -1077,7 +1077,7 @@ vect_update_misalignment_for_peel (struc
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> -    dr_size *= DR_GROUP_SIZE (vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info)));
> +    dr_size *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
>    if (STMT_VINFO_GROUPED_ACCESS (peel_stmt_info))
>      dr_peel_size *= DR_GROUP_SIZE (peel_stmt_info);
>
> @@ -2370,12 +2370,11 @@ vect_slp_analyze_and_verify_node_alignme
>       the node is permuted in which case we start from the first
>       element in the group.  */
>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  gimple *first_stmt = first_stmt_info->stmt;
>    data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
> -    first_stmt = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
> +    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>
> -  data_reference_p dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> +  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
>    vect_compute_data_ref_alignment (dr);
>    /* For creating the data-ref pointer we need alignment of the
>       first element anyway.  */
> @@ -2520,11 +2519,11 @@ vect_analyze_group_access_1 (struct data
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>      {
>        /* First stmt in the interleaving chain. Check the chain.  */
> -      gimple *next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +      stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
>        struct data_reference *data_ref = dr;
>        unsigned int count = 1;
>        tree prev_init = DR_INIT (data_ref);
> -      gimple *prev = stmt_info;
> +      stmt_vec_info prev = stmt_info;
>        HOST_WIDE_INT diff, gaps = 0;
>
>        /* By construction, all group members have INTEGER_CST DR_INITs.  */
> @@ -2535,8 +2534,7 @@ vect_analyze_group_access_1 (struct data
>               stmt, and the rest get their vectorized loads from the first
>               one.  */
>            if (!tree_int_cst_compare (DR_INIT (data_ref),
> -                                     DR_INIT (STMT_VINFO_DATA_REF (
> -                                                  vinfo_for_stmt (next)))))
> +                                    DR_INIT (STMT_VINFO_DATA_REF (next))))
>              {
>                if (DR_IS_WRITE (data_ref))
>                  {
> @@ -2550,16 +2548,16 @@ vect_analyze_group_access_1 (struct data
>                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                  "Two or more load stmts share the same dr.\n");
>
> -              /* For load use the same data-ref load.  */
> -              DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next)) = prev;
> +             /* For load use the same data-ref load.  */
> +             DR_GROUP_SAME_DR_STMT (next) = prev;
>
> -              prev = next;
> -              next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
> -              continue;
> +             prev = next;
> +             next = DR_GROUP_NEXT_ELEMENT (next);
> +             continue;
>              }
>
> -          prev = next;
> -          data_ref = STMT_VINFO_DATA_REF (vinfo_for_stmt (next));
> +         prev = next;
> +         data_ref = STMT_VINFO_DATA_REF (next);
>
>           /* All group members have the same STEP by construction.  */
>           gcc_checking_assert (operand_equal_p (DR_STEP (data_ref), step, 0));
> @@ -2587,12 +2585,12 @@ vect_analyze_group_access_1 (struct data
>
>            /* Store the gap from the previous member of the group. If there is no
>               gap in the access, DR_GROUP_GAP is always 1.  */
> -          DR_GROUP_GAP (vinfo_for_stmt (next)) = diff;
> +         DR_GROUP_GAP (next) = diff;
>
> -          prev_init = DR_INIT (data_ref);
> -          next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
> -          /* Count the number of data-refs in the chain.  */
> -          count++;
> +         prev_init = DR_INIT (data_ref);
> +         next = DR_GROUP_NEXT_ELEMENT (next);
> +         /* Count the number of data-refs in the chain.  */
> +         count++;
>          }
>
>        if (groupsize == 0)
> @@ -2668,15 +2666,13 @@ vect_analyze_group_access (struct data_r
>    if (!vect_analyze_group_access_1 (dr))
>      {
>        /* Dissolve the group if present.  */
> -      gimple *next;
> -      gimple *stmt = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
> -      while (stmt)
> -       {
> -         stmt_vec_info vinfo = vinfo_for_stmt (stmt);
> -         next = DR_GROUP_NEXT_ELEMENT (vinfo);
> -         DR_GROUP_FIRST_ELEMENT (vinfo) = NULL;
> -         DR_GROUP_NEXT_ELEMENT (vinfo) = NULL;
> -         stmt = next;
> +      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
> +      while (stmt_info)
> +       {
> +         stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +         DR_GROUP_FIRST_ELEMENT (stmt_info) = NULL;
> +         DR_GROUP_NEXT_ELEMENT (stmt_info) = NULL;
> +         stmt_info = next;
>         }
>        return false;
>      }
> @@ -3281,7 +3277,7 @@ vect_small_gap_p (loop_vec_info loop_vin
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
> -    count *= DR_GROUP_SIZE (vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info)));
> +    count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
>    return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
>  }
>
> @@ -3379,11 +3375,9 @@ vect_prune_runtime_alias_test_list (loop
>        int comp_res;
>        poly_uint64 lower_bound;
>        struct data_reference *dr_a, *dr_b;
> -      gimple *dr_group_first_a, *dr_group_first_b;
>        tree segment_length_a, segment_length_b;
>        unsigned HOST_WIDE_INT access_size_a, access_size_b;
>        unsigned int align_a, align_b;
> -      gimple *stmt_a, *stmt_b;
>
>        /* Ignore the alias if the VF we chose ended up being no greater
>          than the dependence distance.  */
> @@ -3409,15 +3403,15 @@ vect_prune_runtime_alias_test_list (loop
>         }
>
>        dr_a = DDR_A (ddr);
> -      stmt_a = vect_dr_stmt (DDR_A (ddr));
> +      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
>
>        dr_b = DDR_B (ddr);
> -      stmt_b = vect_dr_stmt (DDR_B (ddr));
> +      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
>
>        /* Skip the pair if inter-iteration dependencies are irrelevant
>          and intra-iteration dependencies are guaranteed to be honored.  */
>        if (ignore_step_p
> -         && (vect_preserves_scalar_order_p (stmt_a, stmt_b)
> +         && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
>               || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
>         {
>           if (dump_enabled_p ())
> @@ -3468,18 +3462,18 @@ vect_prune_runtime_alias_test_list (loop
>           continue;
>         }
>
> -      dr_group_first_a = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt_a));
> +      stmt_vec_info dr_group_first_a = DR_GROUP_FIRST_ELEMENT (stmt_info_a);
>        if (dr_group_first_a)
>         {
> -         stmt_a = dr_group_first_a;
> -         dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt_a));
> +         stmt_info_a = dr_group_first_a;
> +         dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
>         }
>
> -      dr_group_first_b = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt_b));
> +      stmt_vec_info dr_group_first_b = DR_GROUP_FIRST_ELEMENT (stmt_info_b);
>        if (dr_group_first_b)
>         {
> -         stmt_b = dr_group_first_b;
> -         dr_b = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt_b));
> +         stmt_info_b = dr_group_first_b;
> +         dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
>         }
>
>        if (ignore_step_p)
> @@ -4734,10 +4728,9 @@ vect_create_data_ref_ptr (gimple *stmt,
>    /* Likewise for any of the data references in the stmt group.  */
>    else if (DR_GROUP_SIZE (stmt_info) > 1)
>      {
> -      gimple *orig_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +      stmt_vec_info sinfo = DR_GROUP_FIRST_ELEMENT (stmt_info);
>        do
>         {
> -         stmt_vec_info sinfo = vinfo_for_stmt (orig_stmt);
>           struct data_reference *sdr = STMT_VINFO_DATA_REF (sinfo);
>           if (!alias_sets_conflict_p (get_alias_set (aggr_type),
>                                       get_alias_set (DR_REF (sdr))))
> @@ -4745,9 +4738,9 @@ vect_create_data_ref_ptr (gimple *stmt,
>               need_ref_all = true;
>               break;
>             }
> -         orig_stmt = DR_GROUP_NEXT_ELEMENT (sinfo);
> +         sinfo = DR_GROUP_NEXT_ELEMENT (sinfo);
>         }
> -      while (orig_stmt);
> +      while (sinfo);
>      }
>    aggr_ptr_type = build_pointer_type_for_mode (aggr_type, ptr_mode,
>                                                need_ref_all);
> @@ -6345,19 +6338,18 @@ vect_record_grouped_load_vectors (gimple
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
> -  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -  gimple *next_stmt;
> +  stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>    unsigned int i, gap_count;
>    tree tmp_data_ref;
>
>    /* Put a permuted data-ref in the VECTORIZED_STMT field.
>       Since we scan the chain starting from its first node, their order
>       corresponds to the order of data-refs in RESULT_CHAIN.  */
> -  next_stmt = first_stmt;
> +  stmt_vec_info next_stmt_info = first_stmt_info;
>    gap_count = 1;
>    FOR_EACH_VEC_ELT (result_chain, i, tmp_data_ref)
>      {
> -      if (!next_stmt)
> +      if (!next_stmt_info)
>         break;
>
>        /* Skip the gaps.  Loads created for the gaps will be removed by dead
> @@ -6366,27 +6358,27 @@ vect_record_grouped_load_vectors (gimple
>         DR_GROUP_GAP is the number of steps in elements from the previous
>         access (if there is no gap DR_GROUP_GAP is 1).  We skip loads that
>         correspond to the gaps.  */
> -      if (next_stmt != first_stmt
> -          && gap_count < DR_GROUP_GAP (vinfo_for_stmt (next_stmt)))
> +      if (next_stmt_info != first_stmt_info
> +         && gap_count < DR_GROUP_GAP (next_stmt_info))
>        {
>          gap_count++;
>          continue;
>        }
>
> -      while (next_stmt)
> +      while (next_stmt_info)
>          {
>           stmt_vec_info new_stmt_info = vinfo->lookup_def (tmp_data_ref);
>           /* We assume that if VEC_STMT is not NULL, this is a case of multiple
>              copies, and we put the new vector statement in the first available
>              RELATED_STMT.  */
> -         if (!STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)))
> -           STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt)) = new_stmt_info;
> +         if (!STMT_VINFO_VEC_STMT (next_stmt_info))
> +           STMT_VINFO_VEC_STMT (next_stmt_info) = new_stmt_info;
>           else
>              {
> -              if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
> +             if (!DR_GROUP_SAME_DR_STMT (next_stmt_info))
>                  {
>                   stmt_vec_info prev_stmt_info
> -                   = STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
> +                   = STMT_VINFO_VEC_STMT (next_stmt_info);
>                   stmt_vec_info rel_stmt_info
>                     = STMT_VINFO_RELATED_STMT (prev_stmt_info);
>                   while (rel_stmt_info)
> @@ -6399,12 +6391,12 @@ vect_record_grouped_load_vectors (gimple
>                  }
>              }
>
> -         next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +         next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>           gap_count = 1;
> -         /* If NEXT_STMT accesses the same DR as the previous statement,
> +         /* If NEXT_STMT_INFO accesses the same DR as the previous statement,
>              put the same TMP_DATA_REF as its vectorized statement; otherwise
>              get the next data-ref from RESULT_CHAIN.  */
> -         if (!next_stmt || !DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
> +         if (!next_stmt_info || !DR_GROUP_SAME_DR_STMT (next_stmt_info))
>             break;
>          }
>      }
> @@ -6545,8 +6537,8 @@ vect_supportable_dr_alignment (struct da
>           if (loop_vinfo
>               && STMT_SLP_TYPE (stmt_info)
>               && !multiple_p (LOOP_VINFO_VECT_FACTOR (loop_vinfo)
> -                             * DR_GROUP_SIZE (vinfo_for_stmt
> -                                           (DR_GROUP_FIRST_ELEMENT (stmt_info))),
> +                             * (DR_GROUP_SIZE
> +                                (DR_GROUP_FIRST_ELEMENT (stmt_info))),
>                               TYPE_VECTOR_SUBPARTS (vectype)))
>             ;
>           else if (!loop_vinfo
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:04.033010396 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:08.532970436 +0100
> @@ -661,14 +661,14 @@ vect_fixup_reduc_chain (gimple *stmt)
>    REDUC_GROUP_SIZE (firstp) = REDUC_GROUP_SIZE (stmt_info);
>    do
>      {
> -      stmtp = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
> +      stmtp = STMT_VINFO_RELATED_STMT (stmt_info);
>        REDUC_GROUP_FIRST_ELEMENT (stmtp) = firstp;
> -      stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
> -      if (stmt)
> +      stmt_info = REDUC_GROUP_NEXT_ELEMENT (stmt_info);
> +      if (stmt_info)
>         REDUC_GROUP_NEXT_ELEMENT (stmtp)
> -         = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt));
> +         = STMT_VINFO_RELATED_STMT (stmt_info);
>      }
> -  while (stmt);
> +  while (stmt_info);
>    STMT_VINFO_DEF_TYPE (stmtp) = vect_reduction_def;
>  }
>
> @@ -683,12 +683,12 @@ vect_fixup_scalar_cycles_with_patterns (
>    FOR_EACH_VEC_ELT (LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo), i, first)
>      if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (first)))
>        {
> -       gimple *next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
> +       stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
>         while (next)
>           {
> -           if (! STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next)))
> +           if (! STMT_VINFO_IN_PATTERN_P (next))
>               break;
> -           next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
> +           next = REDUC_GROUP_NEXT_ELEMENT (next);
>           }
>        /* If not all stmts in the chain are patterns, try to handle
>           the chain without patterns.  */
> @@ -2188,7 +2188,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>        vinfo = SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0];
>        if (! STMT_VINFO_GROUPED_ACCESS (vinfo))
>         continue;
> -      vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
> +      vinfo = DR_GROUP_FIRST_ELEMENT (vinfo);
>        unsigned int size = DR_GROUP_SIZE (vinfo);
>        tree vectype = STMT_VINFO_VECTYPE (vinfo);
>        if (! vect_store_lanes_supported (vectype, size, false)
> @@ -2198,7 +2198,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>        FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (instance), j, node)
>         {
>           vinfo = SLP_TREE_SCALAR_STMTS (node)[0];
> -         vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
> +         vinfo = DR_GROUP_FIRST_ELEMENT (vinfo);
>           bool single_element_p = !DR_GROUP_NEXT_ELEMENT (vinfo);
>           size = DR_GROUP_SIZE (vinfo);
>           vectype = STMT_VINFO_VECTYPE (vinfo);
> @@ -2527,7 +2527,7 @@ vect_is_slp_reduction (loop_vec_info loo
>    struct loop *loop = (gimple_bb (phi))->loop_father;
>    struct loop *vect_loop = LOOP_VINFO_LOOP (loop_info);
>    enum tree_code code;
> -  gimple *loop_use_stmt = NULL, *first, *next_stmt;
> +  gimple *loop_use_stmt = NULL;
>    stmt_vec_info use_stmt_info, current_stmt_info = NULL;
>    tree lhs;
>    imm_use_iterator imm_iter;
> @@ -2592,12 +2592,12 @@ vect_is_slp_reduction (loop_vec_info loo
>        use_stmt_info = loop_info->lookup_stmt (loop_use_stmt);
>        if (current_stmt_info)
>          {
> -         REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = loop_use_stmt;
> +         REDUC_GROUP_NEXT_ELEMENT (current_stmt_info) = use_stmt_info;
>            REDUC_GROUP_FIRST_ELEMENT (use_stmt_info)
>              = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
>          }
>        else
> -       REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = loop_use_stmt;
> +       REDUC_GROUP_FIRST_ELEMENT (use_stmt_info) = use_stmt_info;
>
>        lhs = gimple_assign_lhs (loop_use_stmt);
>        current_stmt_info = use_stmt_info;
> @@ -2610,9 +2610,10 @@ vect_is_slp_reduction (loop_vec_info loo
>    /* Swap the operands, if needed, to make the reduction operand be the second
>       operand.  */
>    lhs = PHI_RESULT (phi);
> -  next_stmt = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
> -  while (next_stmt)
> +  stmt_vec_info next_stmt_info = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
> +  while (next_stmt_info)
>      {
> +      gassign *next_stmt = as_a <gassign *> (next_stmt_info->stmt);
>        if (gimple_assign_rhs2 (next_stmt) == lhs)
>         {
>           tree op = gimple_assign_rhs1 (next_stmt);
> @@ -2626,7 +2627,7 @@ vect_is_slp_reduction (loop_vec_info loo
>               && vect_valid_reduction_input_p (def_stmt_info))
>             {
>               lhs = gimple_assign_lhs (next_stmt);
> -             next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +             next_stmt_info = REDUC_GROUP_NEXT_ELEMENT (next_stmt_info);
>               continue;
>             }
>
> @@ -2663,13 +2664,14 @@ vect_is_slp_reduction (loop_vec_info loo
>          }
>
>        lhs = gimple_assign_lhs (next_stmt);
> -      next_stmt = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +      next_stmt_info = REDUC_GROUP_NEXT_ELEMENT (next_stmt_info);
>      }
>
>    /* Save the chain for further analysis in SLP detection.  */
> -  first = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
> -  LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first);
> -  REDUC_GROUP_SIZE (vinfo_for_stmt (first)) = size;
> +  stmt_vec_info first_stmt_info
> +    = REDUC_GROUP_FIRST_ELEMENT (current_stmt_info);
> +  LOOP_VINFO_REDUCTION_CHAINS (loop_info).safe_push (first_stmt_info);
> +  REDUC_GROUP_SIZE (first_stmt_info) = size;
>
>    return true;
>  }
> @@ -3254,12 +3256,12 @@ vect_is_simple_reduction (loop_vec_info
>      }
>
>    /* Dissolve a group possibly half-built by vect_is_slp_reduction.  */
> -  gimple *first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
> +  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
>    while (first)
>      {
> -      gimple *next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
> -      REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first)) = NULL;
> -      REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first)) = NULL;
> +      stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
> +      REDUC_GROUP_FIRST_ELEMENT (first) = NULL;
> +      REDUC_GROUP_NEXT_ELEMENT (first) = NULL;
>        first = next;
>      }
>
> @@ -6130,7 +6132,8 @@ vectorizable_reduction (gimple *stmt, gi
>      }
>
>    if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
> -    gcc_assert (slp_node && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt);
> +    gcc_assert (slp_node
> +               && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
>
>    if (gimple_code (stmt) == GIMPLE_PHI)
>      {
> @@ -6784,8 +6787,8 @@ vectorizable_reduction (gimple *stmt, gi
>    tree neutral_op = NULL_TREE;
>    if (slp_node)
>      neutral_op = neutral_op_for_slp_reduction
> -                  (slp_node_instance->reduc_phis, code,
> -                   REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL);
> +      (slp_node_instance->reduc_phis, code,
> +       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL_STMT_VEC_INFO);
>
>    if (double_reduc && reduction_type == FOLD_LEFT_REDUCTION)
>      {
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:22:57.277070390 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:23:08.536970400 +0100
> @@ -820,7 +820,7 @@ vect_reassociating_reduction_p (stmt_vec
>  {
>    return (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
>           ? STMT_VINFO_REDUC_TYPE (stmt_vinfo) != FOLD_LEFT_REDUCTION
> -         : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL);
> +         : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL_STMT_VEC_INFO);
>  }
>
>  /* As above, but also require it to have code CODE and to be a reduction
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:00.401042649 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:08.536970400 +0100
> @@ -712,7 +712,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>    int icode;
>    machine_mode optab_op2_mode;
>    machine_mode vec_mode;
> -  gimple *first_load = NULL, *prev_first_load = NULL;
> +  stmt_vec_info first_load = NULL, prev_first_load = NULL;
>
>    /* For every stmt in NODE find its def stmt/s.  */
>    stmt_vec_info stmt_info;
> @@ -1692,8 +1692,7 @@ vect_attempt_slp_rearrange_stmts (slp_in
>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>      {
>        stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -      first_stmt_info
> -       = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
> +      first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>        /* But we have to keep those permutations that are required because
>           of handling of gaps.  */
>        if (known_eq (unrolling_factor, 1U)
> @@ -1717,7 +1716,6 @@ vect_supported_load_permutation_p (slp_i
>    unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
>    unsigned int i, j, k, next;
>    slp_tree node;
> -  gimple *next_load;
>
>    if (dump_enabled_p ())
>      {
> @@ -1766,26 +1764,25 @@ vect_supported_load_permutation_p (slp_i
>           if (!SLP_TREE_LOAD_PERMUTATION (node).exists ())
>             continue;
>           bool subchain_p = true;
> -          next_load = NULL;
> +         stmt_vec_info next_load_info = NULL;
>           stmt_vec_info load_info;
>           FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
>             {
>               if (j != 0
> -                 && (next_load != load_info
> +                 && (next_load_info != load_info
>                       || DR_GROUP_GAP (load_info) != 1))
>                 {
>                   subchain_p = false;
>                   break;
>                 }
> -             next_load = DR_GROUP_NEXT_ELEMENT (load_info);
> +             next_load_info = DR_GROUP_NEXT_ELEMENT (load_info);
>             }
>           if (subchain_p)
>             SLP_TREE_LOAD_PERMUTATION (node).release ();
>           else
>             {
>               stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -             group_info
> -               = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
> +             group_info = DR_GROUP_FIRST_ELEMENT (group_info);
>               unsigned HOST_WIDE_INT nunits;
>               unsigned k, maxk = 0;
>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
> @@ -1868,33 +1865,33 @@ vect_find_last_scalar_stmt_in_slp (slp_t
>  vect_split_slp_store_group (gimple *first_stmt, unsigned group1_size)
>  {
>    stmt_vec_info first_vinfo = vinfo_for_stmt (first_stmt);
> -  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_stmt);
> +  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_vinfo);
>    gcc_assert (group1_size > 0);
>    int group2_size = DR_GROUP_SIZE (first_vinfo) - group1_size;
>    gcc_assert (group2_size > 0);
>    DR_GROUP_SIZE (first_vinfo) = group1_size;
>
> -  gimple *stmt = first_stmt;
> +  stmt_vec_info stmt_info = first_vinfo;
>    for (unsigned i = group1_size; i > 1; i--)
>      {
> -      stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
> -      gcc_assert (DR_GROUP_GAP (vinfo_for_stmt (stmt)) == 1);
> +      stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +      gcc_assert (DR_GROUP_GAP (stmt_info) == 1);
>      }
>    /* STMT is now the last element of the first group.  */
> -  gimple *group2 = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt));
> -  DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt)) = 0;
> +  stmt_vec_info group2 = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +  DR_GROUP_NEXT_ELEMENT (stmt_info) = 0;
>
> -  DR_GROUP_SIZE (vinfo_for_stmt (group2)) = group2_size;
> -  for (stmt = group2; stmt; stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (stmt)))
> +  DR_GROUP_SIZE (group2) = group2_size;
> +  for (stmt_info = group2; stmt_info;
> +       stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info))
>      {
> -      DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = group2;
> -      gcc_assert (DR_GROUP_GAP (vinfo_for_stmt (stmt)) == 1);
> +      DR_GROUP_FIRST_ELEMENT (stmt_info) = group2;
> +      gcc_assert (DR_GROUP_GAP (stmt_info) == 1);
>      }
>
>    /* For the second group, the DR_GROUP_GAP is that before the original group,
>       plus skipping over the first vector.  */
> -  DR_GROUP_GAP (vinfo_for_stmt (group2))
> -    = DR_GROUP_GAP (first_vinfo) + group1_size;
> +  DR_GROUP_GAP (group2) = DR_GROUP_GAP (first_vinfo) + group1_size;
>
>    /* DR_GROUP_GAP of the first group now has to skip over the second group too.  */
>    DR_GROUP_GAP (first_vinfo) += group2_size;
> @@ -1928,8 +1925,6 @@ vect_analyze_slp_instance (vec_info *vin
>    slp_tree node;
>    unsigned int group_size;
>    tree vectype, scalar_type = NULL_TREE;
> -  gimple *next;
> -  stmt_vec_info next_info;
>    unsigned int i;
>    vec<slp_tree> loads;
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> @@ -1970,34 +1965,32 @@ vect_analyze_slp_instance (vec_info *vin
>
>    /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
>    scalar_stmts.create (group_size);
> -  next = stmt;
> +  stmt_vec_info next_info = stmt_info;
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
>        /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
> -      while (next)
> +      while (next_info)
>          {
> -         next_info = vinfo_for_stmt (next);
>           if (STMT_VINFO_IN_PATTERN_P (next_info)
>               && STMT_VINFO_RELATED_STMT (next_info))
>             scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
>             scalar_stmts.safe_push (next_info);
> -          next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
> +         next_info = DR_GROUP_NEXT_ELEMENT (next_info);
>          }
>      }
>    else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        /* Collect the reduction stmts and store them in
>          SLP_TREE_SCALAR_STMTS.  */
> -      while (next)
> +      while (next_info)
>          {
> -         next_info = vinfo_for_stmt (next);
>           if (STMT_VINFO_IN_PATTERN_P (next_info)
>               && STMT_VINFO_RELATED_STMT (next_info))
>             scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
>             scalar_stmts.safe_push (next_info);
> -          next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
> +         next_info = REDUC_GROUP_NEXT_ELEMENT (next_info);
>          }
>        /* Mark the first element of the reduction chain as reduction to properly
>          transform the node.  In the reduction analysis phase only the last
> @@ -2067,15 +2060,14 @@ vect_analyze_slp_instance (vec_info *vin
>           vec<unsigned> load_permutation;
>           int j;
>           stmt_vec_info load_info;
> -         gimple *first_stmt;
>           bool this_load_permuted = false;
>           load_permutation.create (group_size);
> -         first_stmt = DR_GROUP_FIRST_ELEMENT
> +         stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT
>             (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>           FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
>             {
>               int load_place = vect_get_place_in_interleaving_chain
> -               (load_info, first_stmt);
> +               (load_info, first_stmt_info);
>               gcc_assert (load_place != -1);
>               if (load_place != j)
>                 this_load_permuted = true;
> @@ -2086,8 +2078,8 @@ vect_analyze_slp_instance (vec_info *vin
>                  a gap either because the group is larger than the SLP
>                  group-size or because there is a gap between the groups.  */
>               && (known_eq (unrolling_factor, 1U)
> -                 || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
> -                     && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0)))
> +                 || (group_size == DR_GROUP_SIZE (first_stmt_info)
> +                     && DR_GROUP_GAP (first_stmt_info) == 0)))
>             {
>               load_permutation.release ();
>               continue;
> @@ -2122,11 +2114,9 @@ vect_analyze_slp_instance (vec_info *vin
>           slp_tree load_node;
>           FOR_EACH_VEC_ELT (loads, i, load_node)
>             {
> -             gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
> +             stmt_vec_info stmt_vinfo = DR_GROUP_FIRST_ELEMENT
>                 (SLP_TREE_SCALAR_STMTS (load_node)[0]);
> -             stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
> -                 /* Use SLP for strided accesses (or if we
> -                    can't load-lanes).  */
> +             /* Use SLP for strided accesses (or if we can't load-lanes).  */
>               if (STMT_VINFO_STRIDED_P (stmt_vinfo)
>                   || ! vect_load_lanes_supported
>                         (STMT_VINFO_VECTYPE (stmt_vinfo),
> @@ -2230,11 +2220,11 @@ vect_analyze_slp (vec_info *vinfo, unsig
>                                              max_tree_size))
>               {
>                 /* Dissolve reduction chain group.  */
> -               gimple *next, *stmt = first_element;
> +               gimple *stmt = first_element;
>                 while (stmt)
>                   {
>                     stmt_vec_info vinfo = vinfo_for_stmt (stmt);
> -                   next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
> +                   stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
>                     REDUC_GROUP_FIRST_ELEMENT (vinfo) = NULL;
>                     REDUC_GROUP_NEXT_ELEMENT (vinfo) = NULL;
>                     stmt = next;
> @@ -3698,7 +3688,7 @@ vect_transform_slp_perm_load (slp_tree n
>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      return false;
>
> -  stmt_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info));
> +  stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>
>    mode = TYPE_MODE (vectype);
>
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:00.401042649 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:08.536970400 +0100
> @@ -978,7 +978,7 @@ vect_model_store_cost (stmt_vec_info stm
>                        stmt_vector_for_cost *cost_vec)
>  {
>    unsigned int inside_cost = 0, prologue_cost = 0;
> -  gimple *first_stmt = STMT_VINFO_STMT (stmt_info);
> +  stmt_vec_info first_stmt_info = stmt_info;
>    bool grouped_access_p = STMT_VINFO_GROUPED_ACCESS (stmt_info);
>
>    /* ???  Somehow we need to fix this at the callers.  */
> @@ -998,12 +998,12 @@ vect_model_store_cost (stmt_vec_info stm
>    /* Grouped stores update all elements in the group at once,
>       so we want the DR for the first statement.  */
>    if (!slp_node && grouped_access_p)
> -    first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +    first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>
>    /* True if we should include any once-per-group costs as well as
>       the cost of the statement itself.  For SLP we only get called
>       once per group anyhow.  */
> -  bool first_stmt_p = (first_stmt == STMT_VINFO_STMT (stmt_info));
> +  bool first_stmt_p = (first_stmt_info == stmt_info);
>
>    /* We assume that the cost of a single store-lanes instruction is
>       equivalent to the cost of DR_GROUP_SIZE separate stores.  If a grouped
> @@ -1014,7 +1014,7 @@ vect_model_store_cost (stmt_vec_info stm
>      {
>        /* Uses a high and low interleave or shuffle operations for each
>          needed permute.  */
> -      int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> +      int group_size = DR_GROUP_SIZE (first_stmt_info);
>        int nstmts = ncopies * ceil_log2 (group_size) * group_size;
>        inside_cost = record_stmt_cost (cost_vec, nstmts, vec_perm,
>                                       stmt_info, 0, vect_body);
> @@ -1122,7 +1122,6 @@ vect_model_load_cost (stmt_vec_info stmt
>                       slp_tree slp_node,
>                       stmt_vector_for_cost *cost_vec)
>  {
> -  gimple *first_stmt = STMT_VINFO_STMT (stmt_info);
>    unsigned int inside_cost = 0, prologue_cost = 0;
>    bool grouped_access_p = STMT_VINFO_GROUPED_ACCESS (stmt_info);
>
> @@ -1136,28 +1135,27 @@ vect_model_load_cost (stmt_vec_info stmt
>      {
>        /* If the load is permuted then the alignment is determined by
>          the first group element not by the first scalar stmt DR.  */
> -      gimple *stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>        /* Record the cost for the permutation.  */
>        unsigned n_perms;
>        unsigned assumed_nunits
> -       = vect_nunits_for_cost (STMT_VINFO_VECTYPE (stmt_info));
> +       = vect_nunits_for_cost (STMT_VINFO_VECTYPE (first_stmt_info));
>        unsigned slp_vf = (ncopies * assumed_nunits) / instance->group_size;
>        vect_transform_slp_perm_load (slp_node, vNULL, NULL,
>                                     slp_vf, instance, true,
>                                     &n_perms);
>        inside_cost += record_stmt_cost (cost_vec, n_perms, vec_perm,
> -                                      stmt_info, 0, vect_body);
> +                                      first_stmt_info, 0, vect_body);
>        /* And adjust the number of loads performed.  This handles
>          redundancies as well as loads that are later dead.  */
> -      auto_sbitmap perm (DR_GROUP_SIZE (stmt_info));
> +      auto_sbitmap perm (DR_GROUP_SIZE (first_stmt_info));
>        bitmap_clear (perm);
>        for (unsigned i = 0;
>            i < SLP_TREE_LOAD_PERMUTATION (slp_node).length (); ++i)
>         bitmap_set_bit (perm, SLP_TREE_LOAD_PERMUTATION (slp_node)[i]);
>        ncopies = 0;
>        bool load_seen = false;
> -      for (unsigned i = 0; i < DR_GROUP_SIZE (stmt_info); ++i)
> +      for (unsigned i = 0; i < DR_GROUP_SIZE (first_stmt_info); ++i)
>         {
>           if (i % assumed_nunits == 0)
>             {
> @@ -1171,19 +1169,21 @@ vect_model_load_cost (stmt_vec_info stmt
>        if (load_seen)
>         ncopies++;
>        gcc_assert (ncopies
> -                 <= (DR_GROUP_SIZE (stmt_info) - DR_GROUP_GAP (stmt_info)
> +                 <= (DR_GROUP_SIZE (first_stmt_info)
> +                     - DR_GROUP_GAP (first_stmt_info)
>                       + assumed_nunits - 1) / assumed_nunits);
>      }
>
>    /* Grouped loads read all elements in the group at once,
>       so we want the DR for the first statement.  */
> +  stmt_vec_info first_stmt_info = stmt_info;
>    if (!slp_node && grouped_access_p)
> -    first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +    first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>
>    /* True if we should include any once-per-group costs as well as
>       the cost of the statement itself.  For SLP we only get called
>       once per group anyhow.  */
> -  bool first_stmt_p = (first_stmt == STMT_VINFO_STMT (stmt_info));
> +  bool first_stmt_p = (first_stmt_info == stmt_info);
>
>    /* We assume that the cost of a single load-lanes instruction is
>       equivalent to the cost of DR_GROUP_SIZE separate loads.  If a grouped
> @@ -1194,7 +1194,7 @@ vect_model_load_cost (stmt_vec_info stmt
>      {
>        /* Uses an even and odd extract operations or shuffle operations
>          for each needed permute.  */
> -      int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> +      int group_size = DR_GROUP_SIZE (first_stmt_info);
>        int nstmts = ncopies * ceil_log2 (group_size) * group_size;
>        inside_cost += record_stmt_cost (cost_vec, nstmts, vec_perm,
>                                        stmt_info, 0, vect_body);
> @@ -2183,12 +2183,12 @@ get_group_load_store_type (gimple *stmt,
>    vec_info *vinfo = stmt_info->vinfo;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
> -  gimple *first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -  data_reference *first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> -  unsigned int group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> -  bool single_element_p = (stmt == first_stmt
> +  stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +  data_reference *first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  unsigned int group_size = DR_GROUP_SIZE (first_stmt_info);
> +  bool single_element_p = (stmt_info == first_stmt_info
>                            && !DR_GROUP_NEXT_ELEMENT (stmt_info));
> -  unsigned HOST_WIDE_INT gap = DR_GROUP_GAP (vinfo_for_stmt (first_stmt));
> +  unsigned HOST_WIDE_INT gap = DR_GROUP_GAP (first_stmt_info);
>    poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
>
>    /* True if the vectorized statements would access beyond the last
> @@ -2315,14 +2315,14 @@ get_group_load_store_type (gimple *stmt,
>         *memory_access_type = VMAT_GATHER_SCATTER;
>      }
>
> -  if (vls_type != VLS_LOAD && first_stmt == stmt)
> +  if (vls_type != VLS_LOAD && first_stmt_info == stmt_info)
>      {
>        /* STMT is the leader of the group. Check the operands of all the
>          stmts of the group.  */
> -      gimple *next_stmt = DR_GROUP_NEXT_ELEMENT (stmt_info);
> -      while (next_stmt)
> +      stmt_vec_info next_stmt_info = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +      while (next_stmt_info)
>         {
> -         tree op = vect_get_store_rhs (next_stmt);
> +         tree op = vect_get_store_rhs (next_stmt_info);
>           enum vect_def_type dt;
>           if (!vect_is_simple_use (op, vinfo, &dt))
>             {
> @@ -2331,7 +2331,7 @@ get_group_load_store_type (gimple *stmt,
>                                  "use not simple.\n");
>               return false;
>             }
> -         next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +         next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>         }
>      }
>
> @@ -2482,7 +2482,7 @@ get_load_store_type (gimple *stmt, tree
>       traditional behavior until that can be fixed.  */
>    if (*memory_access_type == VMAT_ELEMENTWISE
>        && !STMT_VINFO_STRIDED_P (stmt_info)
> -      && !(stmt == DR_GROUP_FIRST_ELEMENT (stmt_info)
> +      && !(stmt_info == DR_GROUP_FIRST_ELEMENT (stmt_info)
>            && !DR_GROUP_NEXT_ELEMENT (stmt_info)
>            && !pow2p_hwi (DR_GROUP_SIZE (stmt_info))))
>      {
> @@ -6195,13 +6195,13 @@ ensure_base_align (struct data_reference
>  get_group_alias_ptr_type (gimple *first_stmt)
>  {
>    struct data_reference *first_dr, *next_dr;
> -  gimple *next_stmt;
>
>    first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> -  next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
> -  while (next_stmt)
> +  stmt_vec_info next_stmt_info
> +    = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
> +  while (next_stmt_info)
>      {
> -      next_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (next_stmt));
> +      next_dr = STMT_VINFO_DATA_REF (next_stmt_info);
>        if (get_alias_set (DR_REF (first_dr))
>           != get_alias_set (DR_REF (next_dr)))
>         {
> @@ -6210,7 +6210,7 @@ get_group_alias_ptr_type (gimple *first_
>                              "conflicting alias set types.\n");
>           return ptr_type_node;
>         }
> -      next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +      next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>      }
>    return reference_alias_ptr_type (DR_REF (first_dr));
>  }
> @@ -6248,7 +6248,7 @@ vectorizable_store (gimple *stmt, gimple
>    gimple *ptr_incr = NULL;
>    int ncopies;
>    int j;
> -  gimple *next_stmt, *first_stmt;
> +  stmt_vec_info first_stmt_info;
>    bool grouped_store;
>    unsigned int group_size, i;
>    vec<tree> oprnds = vNULL;
> @@ -6400,13 +6400,13 @@ vectorizable_store (gimple *stmt, gimple
>                    && (slp || memory_access_type != VMAT_CONTIGUOUS));
>    if (grouped_store)
>      {
> -      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -      first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> +      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +      group_size = DR_GROUP_SIZE (first_stmt_info);
>      }
>    else
>      {
> -      first_stmt = stmt;
> +      first_stmt_info = stmt_info;
>        first_dr = dr;
>        group_size = vec_num = 1;
>      }
> @@ -6584,10 +6584,7 @@ vectorizable_store (gimple *stmt, gimple
>      }
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> -    {
> -      gimple *group_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -      DR_GROUP_STORE_COUNT (vinfo_for_stmt (group_stmt))++;
> -    }
> +    DR_GROUP_STORE_COUNT (DR_GROUP_FIRST_ELEMENT (stmt_info))++;
>
>    if (grouped_store)
>      {
> @@ -6596,8 +6593,8 @@ vectorizable_store (gimple *stmt, gimple
>
>        /* We vectorize all the stmts of the interleaving group when we
>          reach the last stmt in the group.  */
> -      if (DR_GROUP_STORE_COUNT (vinfo_for_stmt (first_stmt))
> -         < DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
> +      if (DR_GROUP_STORE_COUNT (first_stmt_info)
> +         < DR_GROUP_SIZE (first_stmt_info)
>           && !slp)
>         {
>           *vec_stmt = NULL;
> @@ -6610,17 +6607,18 @@ vectorizable_store (gimple *stmt, gimple
>            /* VEC_NUM is the number of vect stmts to be created for this
>               group.  */
>            vec_num = SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node);
> -          first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
> -         gcc_assert (DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt)) == first_stmt);
> -          first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> -         op = vect_get_store_rhs (first_stmt);
> +         first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
> +         gcc_assert (DR_GROUP_FIRST_ELEMENT (first_stmt_info)
> +                     == first_stmt_info);
> +         first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +         op = vect_get_store_rhs (first_stmt_info);
>          }
>        else
>          /* VEC_NUM is the number of vect stmts to be created for this
>             group.  */
>         vec_num = group_size;
>
> -      ref_type = get_group_alias_ptr_type (first_stmt);
> +      ref_type = get_group_alias_ptr_type (first_stmt_info);
>      }
>    else
>      ref_type = reference_alias_ptr_type (DR_REF (first_dr));
> @@ -6759,7 +6757,7 @@ vectorizable_store (gimple *stmt, gimple
>
>        prev_stmt_info = NULL;
>        alias_off = build_int_cst (ref_type, 0);
> -      next_stmt = first_stmt;
> +      stmt_vec_info next_stmt_info = first_stmt_info;
>        for (g = 0; g < group_size; g++)
>         {
>           running_off = offvar;
> @@ -6780,7 +6778,7 @@ vectorizable_store (gimple *stmt, gimple
>           for (j = 0; j < ncopies; j++)
>             {
>               /* We've set op and dt above, from vect_get_store_rhs,
> -                and first_stmt == stmt.  */
> +                and first_stmt_info == stmt_info.  */
>               if (j == 0)
>                 {
>                   if (slp)
> @@ -6791,8 +6789,9 @@ vectorizable_store (gimple *stmt, gimple
>                     }
>                   else
>                     {
> -                     op = vect_get_store_rhs (next_stmt);
> -                     vec_oprnd = vect_get_vec_def_for_operand (op, next_stmt);
> +                     op = vect_get_store_rhs (next_stmt_info);
> +                     vec_oprnd = vect_get_vec_def_for_operand
> +                       (op, next_stmt_info);
>                     }
>                 }
>               else
> @@ -6866,7 +6865,7 @@ vectorizable_store (gimple *stmt, gimple
>                     }
>                 }
>             }
> -         next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +         next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>           if (slp)
>             break;
>         }
> @@ -6985,19 +6984,20 @@ vectorizable_store (gimple *stmt, gimple
>
>                  If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN and
>                  OPRNDS are of size 1.  */
> -             next_stmt = first_stmt;
> +             stmt_vec_info next_stmt_info = first_stmt_info;
>               for (i = 0; i < group_size; i++)
>                 {
>                   /* Since gaps are not supported for interleaved stores,
>                      DR_GROUP_SIZE is the exact number of stmts in the chain.
> -                    Therefore, NEXT_STMT can't be NULL_TREE.  In case that
> -                    there is no interleaving, DR_GROUP_SIZE is 1, and only one
> -                    iteration of the loop will be executed.  */
> -                 op = vect_get_store_rhs (next_stmt);
> -                 vec_oprnd = vect_get_vec_def_for_operand (op, next_stmt);
> +                    Therefore, NEXT_STMT_INFO can't be NULL_TREE.  In case
> +                    that there is no interleaving, DR_GROUP_SIZE is 1,
> +                    and only one iteration of the loop will be executed.  */
> +                 op = vect_get_store_rhs (next_stmt_info);
> +                 vec_oprnd = vect_get_vec_def_for_operand
> +                   (op, next_stmt_info);
>                   dr_chain.quick_push (vec_oprnd);
>                   oprnds.quick_push (vec_oprnd);
> -                 next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> +                 next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>                 }
>               if (mask)
>                 vec_mask = vect_get_vec_def_for_operand (mask, stmt,
> @@ -7029,7 +7029,7 @@ vectorizable_store (gimple *stmt, gimple
>             }
>           else
>             dataref_ptr
> -             = vect_create_data_ref_ptr (first_stmt, aggr_type,
> +             = vect_create_data_ref_ptr (first_stmt_info, aggr_type,
>                                           simd_lane_access_p ? loop : NULL,
>                                           offset, &dummy, gsi, &ptr_incr,
>                                           simd_lane_access_p, &inv_p,
> @@ -7132,7 +7132,7 @@ vectorizable_store (gimple *stmt, gimple
>                                         &result_chain);
>             }
>
> -         next_stmt = first_stmt;
> +         stmt_vec_info next_stmt_info = first_stmt_info;
>           for (i = 0; i < vec_num; i++)
>             {
>               unsigned align, misalign;
> @@ -7249,8 +7249,8 @@ vectorizable_store (gimple *stmt, gimple
>               if (slp)
>                 continue;
>
> -             next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> -             if (!next_stmt)
> +             next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
> +             if (!next_stmt_info)
>                 break;
>             }
>         }
> @@ -7423,7 +7423,7 @@ vectorizable_load (gimple *stmt, gimple_
>    gphi *phi = NULL;
>    vec<tree> dr_chain = vNULL;
>    bool grouped_load = false;
> -  gimple *first_stmt;
> +  stmt_vec_info first_stmt_info;
>    stmt_vec_info first_stmt_info_for_drptr = NULL;
>    bool inv_p;
>    bool compute_in_loop = false;
> @@ -7565,8 +7565,8 @@ vectorizable_load (gimple *stmt, gimple_
>        gcc_assert (!nested_in_vect_loop);
>        gcc_assert (!STMT_VINFO_GATHER_SCATTER_P (stmt_info));
>
> -      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> +      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +      group_size = DR_GROUP_SIZE (first_stmt_info);
>
>        if (slp && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
>         slp_perm = true;
> @@ -7696,25 +7696,26 @@ vectorizable_load (gimple *stmt, gimple_
>
>        if (grouped_load)
>         {
> -         first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -         first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> +         first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +         first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>         }
>        else
>         {
> -         first_stmt = stmt;
> +         first_stmt_info = stmt_info;
>           first_dr = dr;
>         }
>        if (slp && grouped_load)
>         {
> -         group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> -         ref_type = get_group_alias_ptr_type (first_stmt);
> +         group_size = DR_GROUP_SIZE (first_stmt_info);
> +         ref_type = get_group_alias_ptr_type (first_stmt_info);
>         }
>        else
>         {
>           if (grouped_load)
>             cst_offset
>               = (tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (vectype)))
> -                * vect_get_place_in_interleaving_chain (stmt, first_stmt));
> +                * vect_get_place_in_interleaving_chain (stmt,
> +                                                        first_stmt_info));
>           group_size = 1;
>           ref_type = reference_alias_ptr_type (DR_REF (dr));
>         }
> @@ -7924,19 +7925,19 @@ vectorizable_load (gimple *stmt, gimple_
>
>    if (grouped_load)
>      {
> -      first_stmt = DR_GROUP_FIRST_ELEMENT (stmt_info);
> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (first_stmt));
> +      first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
> +      group_size = DR_GROUP_SIZE (first_stmt_info);
>        /* For SLP vectorization we directly vectorize a subchain
>           without permutation.  */
>        if (slp && ! SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
> -       first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
> +       first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
>        /* For BB vectorization always use the first stmt to base
>          the data ref pointer on.  */
>        if (bb_vinfo)
>         first_stmt_info_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
>
>        /* Check if the chain of loads is already vectorized.  */
> -      if (STMT_VINFO_VEC_STMT (vinfo_for_stmt (first_stmt))
> +      if (STMT_VINFO_VEC_STMT (first_stmt_info)
>           /* For SLP we would need to copy over SLP_TREE_VEC_STMTS.
>              ???  But we can only do so if there is exactly one
>              as we have no way to get at the rest.  Leave the CSE
> @@ -7950,7 +7951,7 @@ vectorizable_load (gimple *stmt, gimple_
>           *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
>           return true;
>         }
> -      first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> +      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>        group_gap_adj = 0;
>
>        /* VEC_NUM is the number of vect stmts to be created for this group.  */
> @@ -7979,11 +7980,11 @@ vectorizable_load (gimple *stmt, gimple_
>        else
>         vec_num = group_size;
>
> -      ref_type = get_group_alias_ptr_type (first_stmt);
> +      ref_type = get_group_alias_ptr_type (first_stmt_info);
>      }
>    else
>      {
> -      first_stmt = stmt;
> +      first_stmt_info = stmt_info;
>        first_dr = dr;
>        group_size = vec_num = 1;
>        group_gap_adj = 0;
> @@ -8120,7 +8121,7 @@ vectorizable_load (gimple *stmt, gimple_
>         || alignment_support_scheme == dr_explicit_realign)
>        && !compute_in_loop)
>      {
> -      msq = vect_setup_realignment (first_stmt, gsi, &realignment_token,
> +      msq = vect_setup_realignment (first_stmt_info, gsi, &realignment_token,
>                                     alignment_support_scheme, NULL_TREE,
>                                     &at_loop);
>        if (alignment_support_scheme == dr_explicit_realign_optimized)
> @@ -8184,7 +8185,7 @@ vectorizable_load (gimple *stmt, gimple_
>               inv_p = false;
>             }
>           else if (first_stmt_info_for_drptr
> -                  && first_stmt != first_stmt_info_for_drptr)
> +                  && first_stmt_info != first_stmt_info_for_drptr)
>             {
>               dataref_ptr
>                 = vect_create_data_ref_ptr (first_stmt_info_for_drptr,
> @@ -8209,7 +8210,7 @@ vectorizable_load (gimple *stmt, gimple_
>             }
>           else
>             dataref_ptr
> -             = vect_create_data_ref_ptr (first_stmt, aggr_type, at_loop,
> +             = vect_create_data_ref_ptr (first_stmt_info, aggr_type, at_loop,
>                                           offset, &dummy, gsi, &ptr_incr,
>                                           simd_lane_access_p, &inv_p,
>                                           byte_offset, bump);
> @@ -8388,7 +8389,7 @@ vectorizable_load (gimple *stmt, gimple_
>                     tree vs = size_int (TYPE_VECTOR_SUBPARTS (vectype));
>
>                     if (compute_in_loop)
> -                     msq = vect_setup_realignment (first_stmt, gsi,
> +                     msq = vect_setup_realignment (first_stmt_info, gsi,
>                                                     &realignment_token,
>                                                     dr_explicit_realign,
>                                                     dataref_ptr, NULL);
> @@ -9708,8 +9709,7 @@ vect_transform_stmt (gimple *stmt, gimpl
>              one are skipped, and their vec_stmt_info shouldn't be freed
>              meanwhile.  */
>           *grouped_store = true;
> -         stmt_vec_info group_info
> -           = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (stmt_info));
> +         stmt_vec_info group_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>           if (DR_GROUP_STORE_COUNT (group_info) == DR_GROUP_SIZE (group_info))
>             is_store = true;
>         }
> @@ -9817,14 +9817,13 @@ vect_transform_stmt (gimple *stmt, gimpl
>  vect_remove_stores (gimple *first_stmt)
>  {
>    gimple *next = first_stmt;
> -  gimple *tmp;
>    gimple_stmt_iterator next_si;
>
>    while (next)
>      {
>        stmt_vec_info stmt_info = vinfo_for_stmt (next);
>
> -      tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
> +      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
>        if (is_pattern_stmt_p (stmt_info))
>         next = STMT_VINFO_RELATED_STMT (stmt_info);
>        /* Free the attached stmt_vec_info and remove the stmt.  */


* Re: [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos
  2018-07-24 10:01 ` [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos Richard Sandiford
@ 2018-07-25  9:28   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:28 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes the SLP lists grouped_stores and reduction_chains
> from auto_vec<gimple *> to auto_vec<stmt_vec_info>.  It was easier
> to do them together due to the way vect_analyze_slp is structured.
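>
> As a rough sketch of what this buys (the helper name is hypothetical,
> not from the patch; compare the vect_fixup_scalar_cycles_with_patterns
> hunk below), walkers of these vectors no longer need a per-element
> vinfo_for_stmt lookup:
>
>   stmt_vec_info first_info;
>   unsigned int i;
>   FOR_EACH_VEC_ELT (LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo), i, first_info)
>     /* FIRST_INFO is already a stmt_vec_info; with the old
>        auto_vec<gimple *> this test needed vinfo_for_stmt (first).  */
>     if (STMT_VINFO_IN_PATTERN_P (first_info))
>       fixup_chain (first_info);  /* hypothetical helper */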

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::grouped_stores): Change from
>         an auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
>         (_loop_vec_info::reduction_chains): Likewise.
>         * tree-vect-loop.c (vect_fixup_scalar_cycles_with_patterns): Update
>         accordingly.
>         * tree-vect-slp.c (vect_analyze_slp): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:08.536970400 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:12.060939107 +0100
> @@ -259,7 +259,7 @@ struct vec_info {
>
>    /* All interleaving chains of stores, represented by the first
>       stmt in the chain.  */
> -  auto_vec<gimple *> grouped_stores;
> +  auto_vec<stmt_vec_info> grouped_stores;
>
>    /* Cost data used by the target cost model.  */
>    void *target_cost_data;
> @@ -479,7 +479,7 @@ typedef struct _loop_vec_info : public v
>
>    /* All reduction chains in the loop, represented by the first
>       stmt in the chain.  */
> -  auto_vec<gimple *> reduction_chains;
> +  auto_vec<stmt_vec_info> reduction_chains;
>
>    /* Cost vector for a single scalar iteration.  */
>    auto_vec<stmt_info_for_cost> scalar_cost_vec;
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:08.532970436 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:12.060939107 +0100
> @@ -677,13 +677,13 @@ vect_fixup_reduc_chain (gimple *stmt)
>  static void
>  vect_fixup_scalar_cycles_with_patterns (loop_vec_info loop_vinfo)
>  {
> -  gimple *first;
> +  stmt_vec_info first;
>    unsigned i;
>
>    FOR_EACH_VEC_ELT (LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo), i, first)
> -    if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (first)))
> +    if (STMT_VINFO_IN_PATTERN_P (first))
>        {
> -       stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first));
> +       stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
>         while (next)
>           {
>             if (! STMT_VINFO_IN_PATTERN_P (next))
> @@ -696,7 +696,7 @@ vect_fixup_scalar_cycles_with_patterns (
>           {
>             vect_fixup_reduc_chain (first);
>             LOOP_VINFO_REDUCTION_CHAINS (loop_vinfo)[i]
> -             = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first));
> +             = STMT_VINFO_RELATED_STMT (first);
>           }
>        }
>  }
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:08.536970400 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:12.060939107 +0100
> @@ -2202,7 +2202,7 @@ vect_analyze_slp_instance (vec_info *vin
>  vect_analyze_slp (vec_info *vinfo, unsigned max_tree_size)
>  {
>    unsigned int i;
> -  gimple *first_element;
> +  stmt_vec_info first_element;
>
>    DUMP_VECT_SCOPE ("vect_analyze_slp");
>
> @@ -2220,17 +2220,15 @@ vect_analyze_slp (vec_info *vinfo, unsig
>                                              max_tree_size))
>               {
>                 /* Dissolve reduction chain group.  */
> -               gimple *stmt = first_element;
> -               while (stmt)
> +               stmt_vec_info vinfo = first_element;
> +               while (vinfo)
>                   {
> -                   stmt_vec_info vinfo = vinfo_for_stmt (stmt);
>                     stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (vinfo);
>                     REDUC_GROUP_FIRST_ELEMENT (vinfo) = NULL;
>                     REDUC_GROUP_NEXT_ELEMENT (vinfo) = NULL;
> -                   stmt = next;
> +                   vinfo = next;
>                   }
> -               STMT_VINFO_DEF_TYPE (vinfo_for_stmt (first_element))
> -                 = vect_internal_def;
> +               STMT_VINFO_DEF_TYPE (first_element) = vect_internal_def;
>               }
>         }
>


* Re: [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info
  2018-07-24 10:02 ` [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info Richard Sandiford
@ 2018-07-25  9:29   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:29 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:02 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes LOOP_VINFO_MAY_MISALIGN_STMTS from an
> auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
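>
> A minimal sketch of the resulting use (the consuming call is
> illustrative only, not part of the patch):
>
>   stmt_vec_info stmt_info;
>   unsigned int i;
>   FOR_EACH_VEC_ELT (LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo), i, stmt_info)
>     /* Each element is now the stmt_vec_info itself, so its data
>        reference is to hand without a vinfo_for_stmt lookup.  */
>     mark_for_versioning (STMT_VINFO_DATA_REF (stmt_info));  /* hypothetical */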

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_loop_vec_info::may_misalign_stmts): Change
>         from an auto_vec<gimple *> to an auto_vec<stmt_vec_info>.
>         * tree-vect-data-refs.c (vect_enhance_data_refs_alignment): Update
>         accordingly.
>         * tree-vect-loop-manip.c (vect_create_cond_for_align_checks): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:15.756906285 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:18.856878757 +0100
> @@ -472,7 +472,7 @@ typedef struct _loop_vec_info : public v
>
>    /* Statements in the loop that have data references that are candidates for a
>       runtime (loop versioning) misalignment check.  */
> -  auto_vec<gimple *> may_misalign_stmts;
> +  auto_vec<stmt_vec_info> may_misalign_stmts;
>
>    /* Reduction cycles detected in the loop. Used in loop-aware SLP.  */
>    auto_vec<stmt_vec_info> reductions;
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:08.532970436 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:18.856878757 +0100
> @@ -2231,16 +2231,15 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_versioning)
>      {
> -      vec<gimple *> may_misalign_stmts
> +      vec<stmt_vec_info> may_misalign_stmts
>          = LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo);
> -      gimple *stmt;
> +      stmt_vec_info stmt_info;
>
>        /* It can now be assumed that the data references in the statements
>           in LOOP_VINFO_MAY_MISALIGN_STMTS will be aligned in the version
>           of the loop being vectorized.  */
> -      FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt)
> +      FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
>          {
> -          stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>            dr = STMT_VINFO_DATA_REF (stmt_info);
>           SET_DR_MISALIGNMENT (dr, 0);
>           if (dump_enabled_p ())
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:23:04.029010432 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:23:18.856878757 +0100
> @@ -2772,9 +2772,9 @@ vect_create_cond_for_align_checks (loop_
>                                     tree *cond_expr,
>                                    gimple_seq *cond_expr_stmt_list)
>  {
> -  vec<gimple *> may_misalign_stmts
> +  vec<stmt_vec_info> may_misalign_stmts
>      = LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo);
> -  gimple *ref_stmt;
> +  stmt_vec_info stmt_info;
>    int mask = LOOP_VINFO_PTR_MASK (loop_vinfo);
>    tree mask_cst;
>    unsigned int i;
> @@ -2795,23 +2795,22 @@ vect_create_cond_for_align_checks (loop_
>    /* Create expression (mask & (dr_1 || ... || dr_n)) where dr_i is the address
>       of the first vector of the i'th data reference. */
>
> -  FOR_EACH_VEC_ELT (may_misalign_stmts, i, ref_stmt)
> +  FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
>      {
>        gimple_seq new_stmt_list = NULL;
>        tree addr_base;
>        tree addr_tmp_name;
>        tree new_or_tmp_name;
>        gimple *addr_stmt, *or_stmt;
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (ref_stmt);
> -      tree vectype = STMT_VINFO_VECTYPE (stmt_vinfo);
> +      tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>        bool negative = tree_int_cst_compare
> -       (DR_STEP (STMT_VINFO_DATA_REF (stmt_vinfo)), size_zero_node) < 0;
> +       (DR_STEP (STMT_VINFO_DATA_REF (stmt_info)), size_zero_node) < 0;
>        tree offset = negative
>         ? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1) : size_zero_node;
>
>        /* create: addr_tmp = (int)(address_of_first_vector) */
>        addr_base =
> -       vect_create_addr_base_for_vector_ref (ref_stmt, &new_stmt_list,
> +       vect_create_addr_base_for_vector_ref (stmt_info, &new_stmt_list,
>                                               offset);
>        if (new_stmt_list != NULL)
>         gimple_seq_add_seq (cond_expr_stmt_list, new_stmt_list);


* Re: [22/46] Make DR_GROUP_SAME_DR_STMT a stmt_vec_info
  2018-07-24 10:02 ` [22/46] Make DR_GROUP_SAME_DR_STMT " Richard Sandiford
@ 2018-07-25  9:29   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:29 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:02 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes STMT_VINFO_SAME_DR_STMT from a gimple stmt to a
> stmt_vec_info.
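>
> The only code affected is the SLP-type check in vectorizable_load,
> which loses a vinfo_for_stmt call; in sketch form:
>
>   /* Before: the field holds a gimple stmt, so a lookup is needed
>      before STMT_SLP_TYPE can be applied.  */
>   if (STMT_SLP_TYPE (stmt_info)
>       != STMT_SLP_TYPE (vinfo_for_stmt
>                           (DR_GROUP_SAME_DR_STMT (stmt_info))))
>     ...
>
>   /* After: the field is itself a stmt_vec_info.  */
>   if (STMT_SLP_TYPE (stmt_info)
>       != STMT_SLP_TYPE (DR_GROUP_SAME_DR_STMT (stmt_info)))
>     ...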

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::same_dr_stmt): Change from
>         a gimple stmt to a stmt_vec_info.
>         * tree-vect-stmts.c (vectorizable_load): Update accordingly.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:12.060939107 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:15.756906285 +0100
> @@ -876,7 +876,7 @@ struct _stmt_vec_info {
>    stmt_vec_info next_element;
>    /* For data-refs, in case that two or more stmts share data-ref, this is the
>       pointer to the previously detected stmt with the same dr.  */
> -  gimple *same_dr_stmt;
> +  stmt_vec_info same_dr_stmt;
>    /* The size of the group.  */
>    unsigned int size;
>    /* For stores, number of stores from this group seen. We vectorize the last
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:08.536970400 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:15.756906285 +0100
> @@ -7590,8 +7590,7 @@ vectorizable_load (gimple *stmt, gimple_
>          we have to give up.  */
>        if (DR_GROUP_SAME_DR_STMT (stmt_info)
>           && (STMT_SLP_TYPE (stmt_info)
> -             != STMT_SLP_TYPE (vinfo_for_stmt
> -                                (DR_GROUP_SAME_DR_STMT (stmt_info)))))
> +             != STMT_SLP_TYPE (DR_GROUP_SAME_DR_STMT (stmt_info))))
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,


* Re: [24/46] Make stmt_info_for_cost use a stmt_vec_info
  2018-07-24 10:02 ` [24/46] Make stmt_info_for_cost use " Richard Sandiford
@ 2018-07-25  9:30   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:30 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:02 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes stmt_info_for_cost carry a stmt_vec_info instead
> of a gimple stmt.  The structure is internal to the vectoriser,
> so targets aren't affected.
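>
> The structure becomes (only the stmt field changes):
>
>   struct stmt_info_for_cost {
>     int count;
>     enum vect_cost_for_stmt kind;
>     enum vect_cost_model_location where;
>     stmt_vec_info stmt_info;   /* Previously: gimple *stmt.  */
>     int misalign;
>   };
>
> Consumers can then pass si->stmt_info straight to add_stmt_cost and
> record_stmt_cost instead of repeating the null-guarded lookup
>
>   si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO
>
> at every call site.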

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (stmt_info_for_cost::stmt): Replace with...
>         (stmt_info_for_cost::stmt_info): ...this new field.
>         (add_stmt_costs): Update accordingly.
>         * tree-vect-loop.c (vect_compute_single_scalar_iteration_cost)
>         (vect_get_known_peeling_cost): Likewise.
>         (vect_estimate_min_profitable_iters): Likewise.
>         * tree-vect-stmts.c (record_stmt_cost): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:18.856878757 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:22.264848493 +0100
> @@ -116,7 +116,7 @@ struct stmt_info_for_cost {
>    int count;
>    enum vect_cost_for_stmt kind;
>    enum vect_cost_model_location where;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    int misalign;
>  };
>
> @@ -1282,10 +1282,7 @@ add_stmt_costs (void *data, stmt_vector_
>    stmt_info_for_cost *cost;
>    unsigned i;
>    FOR_EACH_VEC_ELT (*cost_vec, i, cost)
> -    add_stmt_cost (data, cost->count, cost->kind,
> -                  (cost->stmt
> -                   ? vinfo_for_stmt (cost->stmt)
> -                   : NULL_STMT_VEC_INFO),
> +    add_stmt_cost (data, cost->count, cost->kind, cost->stmt_info,
>                    cost->misalign, cost->where);
>  }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:12.060939107 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:22.260848529 +0100
> @@ -1136,13 +1136,9 @@ vect_compute_single_scalar_iteration_cos
>    int j;
>    FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo),
>                     j, si)
> -    {
> -      struct _stmt_vec_info *stmt_info
> -       = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -      (void) add_stmt_cost (target_cost_data, si->count,
> -                           si->kind, stmt_info, si->misalign,
> -                           vect_body);
> -    }
> +    (void) add_stmt_cost (target_cost_data, si->count,
> +                         si->kind, si->stmt_info, si->misalign,
> +                         vect_body);
>    unsigned dummy, body_cost = 0;
>    finish_cost (target_cost_data, &dummy, &body_cost, &dummy);
>    destroy_cost_data (target_cost_data);
> @@ -3344,24 +3340,16 @@ vect_get_known_peeling_cost (loop_vec_in
>    int j;
>    if (peel_iters_prologue)
>      FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
> -       {
> -         stmt_vec_info stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -         retval += record_stmt_cost (prologue_cost_vec,
> -                                     si->count * peel_iters_prologue,
> -                                     si->kind, stmt_info, si->misalign,
> -                                     vect_prologue);
> -       }
> +      retval += record_stmt_cost (prologue_cost_vec,
> +                                 si->count * peel_iters_prologue,
> +                                 si->kind, si->stmt_info, si->misalign,
> +                                 vect_prologue);
>    if (*peel_iters_epilogue)
>      FOR_EACH_VEC_ELT (*scalar_cost_vec, j, si)
> -       {
> -         stmt_vec_info stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -         retval += record_stmt_cost (epilogue_cost_vec,
> -                                     si->count * *peel_iters_epilogue,
> -                                     si->kind, stmt_info, si->misalign,
> -                                     vect_epilogue);
> -       }
> +      retval += record_stmt_cost (epilogue_cost_vec,
> +                                 si->count * *peel_iters_epilogue,
> +                                 si->kind, si->stmt_info, si->misalign,
> +                                 vect_epilogue);
>
>    return retval;
>  }
> @@ -3497,13 +3485,9 @@ vect_estimate_min_profitable_iters (loop
>           int j;
>           FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo),
>                             j, si)
> -           {
> -             struct _stmt_vec_info *stmt_info
> -               = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -             (void) add_stmt_cost (target_cost_data, si->count,
> -                                   si->kind, stmt_info, si->misalign,
> -                                   vect_epilogue);
> -           }
> +           (void) add_stmt_cost (target_cost_data, si->count,
> +                                 si->kind, si->stmt_info, si->misalign,
> +                                 vect_epilogue);
>         }
>      }
>    else if (npeel < 0)
> @@ -3535,15 +3519,13 @@ vect_estimate_min_profitable_iters (loop
>        int j;
>        FOR_EACH_VEC_ELT (LOOP_VINFO_SCALAR_ITERATION_COST (loop_vinfo), j, si)
>         {
> -         struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
>           (void) add_stmt_cost (target_cost_data,
>                                 si->count * peel_iters_prologue,
> -                               si->kind, stmt_info, si->misalign,
> +                               si->kind, si->stmt_info, si->misalign,
>                                 vect_prologue);
>           (void) add_stmt_cost (target_cost_data,
>                                 si->count * peel_iters_epilogue,
> -                               si->kind, stmt_info, si->misalign,
> +                               si->kind, si->stmt_info, si->misalign,
>                                 vect_epilogue);
>         }
>      }
> @@ -3566,20 +3548,12 @@ vect_estimate_min_profitable_iters (loop
>                                           &epilogue_cost_vec);
>
>        FOR_EACH_VEC_ELT (prologue_cost_vec, j, si)
> -       {
> -         struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -         (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
> -                               si->misalign, vect_prologue);
> -       }
> +       (void) add_stmt_cost (data, si->count, si->kind, si->stmt_info,
> +                             si->misalign, vect_prologue);
>
>        FOR_EACH_VEC_ELT (epilogue_cost_vec, j, si)
> -       {
> -         struct _stmt_vec_info *stmt_info
> -           = si->stmt ? vinfo_for_stmt (si->stmt) : NULL_STMT_VEC_INFO;
> -         (void) add_stmt_cost (data, si->count, si->kind, stmt_info,
> -                               si->misalign, vect_epilogue);
> -       }
> +       (void) add_stmt_cost (data, si->count, si->kind, si->stmt_info,
> +                             si->misalign, vect_epilogue);
>
>        prologue_cost_vec.release ();
>        epilogue_cost_vec.release ();
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:15.756906285 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:22.260848529 +0100
> @@ -98,9 +98,7 @@ record_stmt_cost (stmt_vector_for_cost *
>        && STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>      kind = vector_scatter_store;
>
> -  stmt_info_for_cost si = { count, kind, where,
> -      stmt_info ? STMT_VINFO_STMT (stmt_info) : NULL,
> -      misalign };
> +  stmt_info_for_cost si = { count, kind, where, stmt_info, misalign };
>    body_cost_vec->safe_push (si);
>
>    tree vectype = stmt_info ? stmt_vectype (stmt_info) : NULL_TREE;


* Re: [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos
  2018-07-24 10:03 ` [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos Richard Sandiford
@ 2018-07-25  9:31   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:31 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:03 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> ...and also make vect_find_last_scalar_stmt_in_slp return a stmt_vec_info.
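>
> Since the arguments can no longer be null, each function reduces to
> a plain UID comparison, e.g. (checking assert elided):
>
>   static inline stmt_vec_info
>   get_later_stmt (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
>   {
>     if (gimple_uid (stmt1_info->stmt) > gimple_uid (stmt2_info->stmt))
>       return stmt1_info;
>     else
>       return stmt2_info;
>   }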

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (get_earlier_stmt, get_later_stmt): Take and
>         return stmt_vec_infos rather than gimple stmts.  Do not accept
>         null arguments.
>         (vect_find_last_scalar_stmt_in_slp): Return a stmt_vec_info instead
>         of a gimple stmt.
>         * tree-vect-slp.c (vect_find_last_scalar_stmt_in_slp): Likewise.
>         Update use of get_later_stmt.
>         (vect_get_constant_vectors): Update call accordingly.
>         (vect_schedule_slp_instance): Likewise.
>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Likewise.
>         (vect_slp_analyze_instance_dependence): Likewise.
>         (vect_preserves_scalar_order_p): Update use of get_earlier_stmt.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:22.264848493 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:25.232822136 +0100
> @@ -1119,68 +1119,36 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
>      }
>  }
>
> -/* Return the earlier statement between STMT1 and STMT2.  */
> +/* Return the earlier statement between STMT1_INFO and STMT2_INFO.  */
>
> -static inline gimple *
> -get_earlier_stmt (gimple *stmt1, gimple *stmt2)
> +static inline stmt_vec_info
> +get_earlier_stmt (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
>  {
> -  unsigned int uid1, uid2;
> +  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (stmt1_info)
> +                       || !STMT_VINFO_RELATED_STMT (stmt1_info))
> +                      && (STMT_VINFO_IN_PATTERN_P (stmt2_info)
> +                          || !STMT_VINFO_RELATED_STMT (stmt2_info)));
>
> -  if (stmt1 == NULL)
> -    return stmt2;
> -
> -  if (stmt2 == NULL)
> -    return stmt1;
> -
> -  uid1 = gimple_uid (stmt1);
> -  uid2 = gimple_uid (stmt2);
> -
> -  if (uid1 == 0 || uid2 == 0)
> -    return NULL;
> -
> -  gcc_assert (uid1 <= stmt_vec_info_vec->length ()
> -             && uid2 <= stmt_vec_info_vec->length ());
> -  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt1))
> -                       || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt1)))
> -                      && (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt2))
> -                          || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt2))));
> -
> -  if (uid1 < uid2)
> -    return stmt1;
> +  if (gimple_uid (stmt1_info->stmt) < gimple_uid (stmt2_info->stmt))
> +    return stmt1_info;
>    else
> -    return stmt2;
> +    return stmt2_info;
>  }
>
> -/* Return the later statement between STMT1 and STMT2.  */
> +/* Return the later statement between STMT1_INFO and STMT2_INFO.  */
>
> -static inline gimple *
> -get_later_stmt (gimple *stmt1, gimple *stmt2)
> +static inline stmt_vec_info
> +get_later_stmt (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
>  {
> -  unsigned int uid1, uid2;
> -
> -  if (stmt1 == NULL)
> -    return stmt2;
> -
> -  if (stmt2 == NULL)
> -    return stmt1;
> -
> -  uid1 = gimple_uid (stmt1);
> -  uid2 = gimple_uid (stmt2);
> -
> -  if (uid1 == 0 || uid2 == 0)
> -    return NULL;
> -
> -  gcc_assert (uid1 <= stmt_vec_info_vec->length ()
> -             && uid2 <= stmt_vec_info_vec->length ());
> -  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt1))
> -                       || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt1)))
> -                      && (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (stmt2))
> -                          || !STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt2))));
> +  gcc_checking_assert ((STMT_VINFO_IN_PATTERN_P (stmt1_info)
> +                       || !STMT_VINFO_RELATED_STMT (stmt1_info))
> +                      && (STMT_VINFO_IN_PATTERN_P (stmt2_info)
> +                          || !STMT_VINFO_RELATED_STMT (stmt2_info)));
>
> -  if (uid1 > uid2)
> -    return stmt1;
> +  if (gimple_uid (stmt1_info->stmt) > gimple_uid (stmt2_info->stmt))
> +    return stmt1_info;
>    else
> -    return stmt2;
> +    return stmt2_info;
>  }
>
>  /* Return TRUE if a statement represented by STMT_INFO is a part of a
> @@ -1674,7 +1642,7 @@ extern bool vect_make_slp_decision (loop
>  extern void vect_detect_hybrid_slp (loop_vec_info);
>  extern void vect_get_slp_defs (vec<tree> , slp_tree, vec<vec<tree> > *);
>  extern bool vect_slp_bb (basic_block);
> -extern gimple *vect_find_last_scalar_stmt_in_slp (slp_tree);
> +extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
>  extern bool is_simple_and_all_uses_invariant (gimple *, loop_vec_info);
>  extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
>                                             unsigned int * = NULL,
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:12.060939107 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:25.232822136 +0100
> @@ -1838,18 +1838,17 @@ vect_supported_load_permutation_p (slp_i
>
>  /* Find the last store in SLP INSTANCE.  */
>
> -gimple *
> +stmt_vec_info
>  vect_find_last_scalar_stmt_in_slp (slp_tree node)
>  {
> -  gimple *last = NULL;
> +  stmt_vec_info last = NULL;
>    stmt_vec_info stmt_vinfo;
>
>    for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>      {
>        if (is_pattern_stmt_p (stmt_vinfo))
> -       last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
> -      else
> -       last = get_later_stmt (stmt_vinfo, last);
> +       stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> +      last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
>      }
>
>    return last;
> @@ -3480,8 +3479,9 @@ vect_get_constant_vectors (tree op, slp_
>               gimple_stmt_iterator gsi;
>               if (place_after_defs)
>                 {
> -                 gsi = gsi_for_stmt
> -                         (vect_find_last_scalar_stmt_in_slp (slp_node));
> +                 stmt_vec_info last_stmt_info
> +                   = vect_find_last_scalar_stmt_in_slp (slp_node);
> +                 gsi = gsi_for_stmt (last_stmt_info->stmt);
>                   init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
>                                            &gsi);
>                 }
> @@ -3910,7 +3910,8 @@ vect_schedule_slp_instance (slp_tree nod
>
>    /* Vectorized stmts go before the last scalar stmt which is where
>       all uses are ready.  */
> -  si = gsi_for_stmt (vect_find_last_scalar_stmt_in_slp (node));
> +  stmt_vec_info last_stmt_info = vect_find_last_scalar_stmt_in_slp (node);
> +  si = gsi_for_stmt (last_stmt_info->stmt);
>
>    /* Mark the first element of the reduction chain as reduction to properly
>       transform the node.  In the analysis phase only the last element of the
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:18.856878757 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:25.228822172 +0100
> @@ -216,8 +216,8 @@ vect_preserves_scalar_order_p (gimple *s
>      stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
>    if (is_pattern_stmt_p (stmtinfo_b))
>      stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
> -  gimple *earlier_stmt = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
> -  return !DR_IS_WRITE (STMT_VINFO_DATA_REF (vinfo_for_stmt (earlier_stmt)));
> +  stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
> +  return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
>  }
>
>  /* A subroutine of vect_analyze_data_ref_dependence.  Handle
> @@ -671,17 +671,17 @@ vect_slp_analyze_node_dependences (slp_i
>    /* This walks over all stmts involved in the SLP load/store done
>       in NODE verifying we can sink them up to the last stmt in the
>       group.  */
> -  gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
> +  stmt_vec_info last_access_info = vect_find_last_scalar_stmt_in_slp (node);
>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>      {
>        stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
> -      if (access_info == last_access)
> +      if (access_info == last_access_info)
>         continue;
>        data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
>        ao_ref ref;
>        bool ref_initialized_p = false;
>        for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
> -          gsi_stmt (gsi) != last_access; gsi_next (&gsi))
> +          gsi_stmt (gsi) != last_access_info->stmt; gsi_next (&gsi))
>         {
>           gimple *stmt = gsi_stmt (gsi);
>           if (! gimple_vuse (stmt)
> @@ -757,14 +757,14 @@ vect_slp_analyze_instance_dependence (sl
>      store = NULL;
>
>    /* Verify we can sink stores to the vectorized stmt insert location.  */
> -  gimple *last_store = NULL;
> +  stmt_vec_info last_store_info = NULL;
>    if (store)
>      {
>        if (! vect_slp_analyze_node_dependences (instance, store, vNULL, NULL))
>         return false;
>
>        /* Mark stores in this instance and remember the last one.  */
> -      last_store = vect_find_last_scalar_stmt_in_slp (store);
> +      last_store_info = vect_find_last_scalar_stmt_in_slp (store);
>        for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>         gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, true);
>      }
> @@ -779,7 +779,7 @@ vect_slp_analyze_instance_dependence (sl
>      if (! vect_slp_analyze_node_dependences (instance, load,
>                                              store
>                                              ? SLP_TREE_SCALAR_STMTS (store)
> -                                            : vNULL, last_store))
> +                                            : vNULL, last_store_info))
>        {
>         res = false;
>         break;


* Re: [26/46] Make more use of dyn_cast in tree-vect*
  2018-07-24 10:03 ` [26/46] Make more use of dyn_cast in tree-vect* Richard Sandiford
@ 2018-07-25  9:31   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:31 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:03 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> If we use stmt_vec_infos to represent statements in the vectoriser,
> it's then more natural to use dyn_cast when processing the statement
> as an assignment, call, etc.  This patch does that in a few more places.
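>
> The idiom is the usual one: replace an is_gimple_call or
> is_gimple_assign test with a dyn_cast whose result feeds the
> class-specific accessors.  A minimal sketch:
>
>   /* Before: a class check, then accessors on the generic stmt.  */
>   if (is_gimple_call (stmt) && gimple_call_internal_p (stmt))
>     ifn = gimple_call_internal_fn (stmt);
>
>   /* After: one dyn_cast, whose result feeds the accessors.  */
>   if (gcall *call = dyn_cast <gcall *> (stmt))
>     if (gimple_call_internal_p (call))
>       ifn = gimple_call_internal_fn (call);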

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-data-refs.c (vect_check_gather_scatter): Pass the
>         gcall rather than the generic gimple stmt to gimple_call_internal_fn.
>         (vect_get_smallest_scalar_type, can_group_stmts_p): Use dyn_cast
>         to get gassigns and gcalls, rather than operating on generic gimple
>         stmts.
>         * tree-vect-stmts.c (exist_non_indexing_operands_for_use_p)
>         (vect_mark_stmts_to_be_vectorized, vectorizable_store)
>         (vectorizable_load, vect_analyze_stmt): Likewise.
>         * tree-vect-loop.c (vectorizable_reduction): Likewise gphi.
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:25.228822172 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:28.452793542 +0100
> @@ -130,15 +130,16 @@ vect_get_smallest_scalar_type (gimple *s
>
>    lhs = rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
>
> -  if (is_gimple_assign (stmt)
> -      && (gimple_assign_cast_p (stmt)
> -          || gimple_assign_rhs_code (stmt) == DOT_PROD_EXPR
> -          || gimple_assign_rhs_code (stmt) == WIDEN_SUM_EXPR
> -          || gimple_assign_rhs_code (stmt) == WIDEN_MULT_EXPR
> -          || gimple_assign_rhs_code (stmt) == WIDEN_LSHIFT_EXPR
> -          || gimple_assign_rhs_code (stmt) == FLOAT_EXPR))
> +  gassign *assign = dyn_cast <gassign *> (stmt);
> +  if (assign
> +      && (gimple_assign_cast_p (assign)
> +         || gimple_assign_rhs_code (assign) == DOT_PROD_EXPR
> +         || gimple_assign_rhs_code (assign) == WIDEN_SUM_EXPR
> +         || gimple_assign_rhs_code (assign) == WIDEN_MULT_EXPR
> +         || gimple_assign_rhs_code (assign) == WIDEN_LSHIFT_EXPR
> +         || gimple_assign_rhs_code (assign) == FLOAT_EXPR))
>      {
> -      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
> +      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (assign));
>
>        rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (rhs_type));
>        if (rhs < lhs)
> @@ -2850,21 +2851,23 @@ can_group_stmts_p (gimple *stmt1, gimple
>    if (gimple_assign_single_p (stmt1))
>      return gimple_assign_single_p (stmt2);
>
> -  if (is_gimple_call (stmt1) && gimple_call_internal_p (stmt1))
> +  gcall *call1 = dyn_cast <gcall *> (stmt1);
> +  if (call1 && gimple_call_internal_p (call1))
>      {
>        /* Check for two masked loads or two masked stores.  */
> -      if (!is_gimple_call (stmt2) || !gimple_call_internal_p (stmt2))
> +      gcall *call2 = dyn_cast <gcall *> (stmt2);
> +      if (!call2 || !gimple_call_internal_p (call2))
>         return false;
> -      internal_fn ifn = gimple_call_internal_fn (stmt1);
> +      internal_fn ifn = gimple_call_internal_fn (call1);
>        if (ifn != IFN_MASK_LOAD && ifn != IFN_MASK_STORE)
>         return false;
> -      if (ifn != gimple_call_internal_fn (stmt2))
> +      if (ifn != gimple_call_internal_fn (call2))
>         return false;
>
>        /* Check that the masks are the same.  Cope with casts of masks,
>          like those created by build_mask_conversion.  */
> -      tree mask1 = gimple_call_arg (stmt1, 2);
> -      tree mask2 = gimple_call_arg (stmt2, 2);
> +      tree mask1 = gimple_call_arg (call1, 2);
> +      tree mask2 = gimple_call_arg (call2, 2);
>        if (!operand_equal_p (mask1, mask2, 0))
>         {
>           mask1 = strip_conversion (mask1);
> @@ -3665,7 +3668,7 @@ vect_check_gather_scatter (gimple *stmt,
>    gcall *call = dyn_cast <gcall *> (stmt);
>    if (call && gimple_call_internal_p (call))
>      {
> -      ifn = gimple_call_internal_fn (stmt);
> +      ifn = gimple_call_internal_fn (call);
>        if (internal_gather_scatter_fn_p (ifn))
>         {
>           vect_describe_gather_scatter_call (call, info);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:22.260848529 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:28.456793506 +0100
> @@ -389,30 +389,31 @@ exist_non_indexing_operands_for_use_p (t
>       Therefore, all we need to check is if STMT falls into the
>       first case, and whether var corresponds to USE.  */
>
> -  if (!gimple_assign_copy_p (stmt))
> +  gassign *assign = dyn_cast <gassign *> (stmt);
> +  if (!assign || !gimple_assign_copy_p (assign))
>      {
> -      if (is_gimple_call (stmt)
> -         && gimple_call_internal_p (stmt))
> +      gcall *call = dyn_cast <gcall *> (stmt);
> +      if (call && gimple_call_internal_p (call))
>         {
> -         internal_fn ifn = gimple_call_internal_fn (stmt);
> +         internal_fn ifn = gimple_call_internal_fn (call);
>           int mask_index = internal_fn_mask_index (ifn);
>           if (mask_index >= 0
> -             && use == gimple_call_arg (stmt, mask_index))
> +             && use == gimple_call_arg (call, mask_index))
>             return true;
>           int stored_value_index = internal_fn_stored_value_index (ifn);
>           if (stored_value_index >= 0
> -             && use == gimple_call_arg (stmt, stored_value_index))
> +             && use == gimple_call_arg (call, stored_value_index))
>             return true;
>           if (internal_gather_scatter_fn_p (ifn)
> -             && use == gimple_call_arg (stmt, 1))
> +             && use == gimple_call_arg (call, 1))
>             return true;
>         }
>        return false;
>      }
>
> -  if (TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME)
> +  if (TREE_CODE (gimple_assign_lhs (assign)) == SSA_NAME)
>      return false;
> -  operand = gimple_assign_rhs1 (stmt);
> +  operand = gimple_assign_rhs1 (assign);
>    if (TREE_CODE (operand) != SSA_NAME)
>      return false;
>
> @@ -739,10 +740,10 @@ vect_mark_stmts_to_be_vectorized (loop_v
>            /* Pattern statements are not inserted into the code, so
>               FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
>               have to scan the RHS or function arguments instead.  */
> -          if (is_gimple_assign (stmt))
> -            {
> -             enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
> -             tree op = gimple_assign_rhs1 (stmt);
> +         if (gassign *assign = dyn_cast <gassign *> (stmt))
> +           {
> +             enum tree_code rhs_code = gimple_assign_rhs_code (assign);
> +             tree op = gimple_assign_rhs1 (assign);
>
>               i = 1;
>               if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op))
> @@ -754,25 +755,25 @@ vect_mark_stmts_to_be_vectorized (loop_v
>                     return false;
>                   i = 2;
>                 }
> -             for (; i < gimple_num_ops (stmt); i++)
> -                {
> -                 op = gimple_op (stmt, i);
> +             for (; i < gimple_num_ops (assign); i++)
> +               {
> +                 op = gimple_op (assign, i);
>                    if (TREE_CODE (op) == SSA_NAME
>                       && !process_use (stmt, op, loop_vinfo, relevant,
>                                        &worklist, false))
>                      return false;
>                   }
>              }
> -          else if (is_gimple_call (stmt))
> -            {
> -              for (i = 0; i < gimple_call_num_args (stmt); i++)
> -                {
> -                  tree arg = gimple_call_arg (stmt, i);
> +         else if (gcall *call = dyn_cast <gcall *> (stmt))
> +           {
> +             for (i = 0; i < gimple_call_num_args (call); i++)
> +               {
> +                 tree arg = gimple_call_arg (call, i);
>                   if (!process_use (stmt, arg, loop_vinfo, relevant,
>                                     &worklist, false))
>                      return false;
> -                }
> -            }
> +               }
> +           }
>          }
>        else
>          FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE)
> @@ -6274,9 +6275,9 @@ vectorizable_store (gimple *stmt, gimple
>    /* Is vectorizable store? */
>
>    tree mask = NULL_TREE, mask_vectype = NULL_TREE;
> -  if (is_gimple_assign (stmt))
> +  if (gassign *assign = dyn_cast <gassign *> (stmt))
>      {
> -      tree scalar_dest = gimple_assign_lhs (stmt);
> +      tree scalar_dest = gimple_assign_lhs (assign);
>        if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
>           && is_pattern_stmt_p (stmt_info))
>         scalar_dest = TREE_OPERAND (scalar_dest, 0);
> @@ -7445,13 +7446,13 @@ vectorizable_load (gimple *stmt, gimple_
>      return false;
>
>    tree mask = NULL_TREE, mask_vectype = NULL_TREE;
> -  if (is_gimple_assign (stmt))
> +  if (gassign *assign = dyn_cast <gassign *> (stmt))
>      {
> -      scalar_dest = gimple_assign_lhs (stmt);
> +      scalar_dest = gimple_assign_lhs (assign);
>        if (TREE_CODE (scalar_dest) != SSA_NAME)
>         return false;
>
> -      tree_code code = gimple_assign_rhs_code (stmt);
> +      tree_code code = gimple_assign_rhs_code (assign);
>        if (code != ARRAY_REF
>           && code != BIT_FIELD_REF
>           && code != INDIRECT_REF
> @@ -9557,9 +9558,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
>    if (STMT_VINFO_RELEVANT_P (stmt_info))
>      {
>        gcc_assert (!VECTOR_MODE_P (TYPE_MODE (gimple_expr_type (stmt))));
> +      gcall *call = dyn_cast <gcall *> (stmt);
>        gcc_assert (STMT_VINFO_VECTYPE (stmt_info)
> -                 || (is_gimple_call (stmt)
> -                     && gimple_call_lhs (stmt) == NULL_TREE));
> +                 || (call && gimple_call_lhs (call) == NULL_TREE));
>        *need_to_vectorize = true;
>      }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:22.260848529 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:28.456793506 +0100
> @@ -6109,9 +6109,9 @@ vectorizable_reduction (gimple *stmt, gi
>      gcc_assert (slp_node
>                 && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
>
> -  if (gimple_code (stmt) == GIMPLE_PHI)
> +  if (gphi *phi = dyn_cast <gphi *> (stmt))
>      {
> -      tree phi_result = gimple_phi_result (stmt);
> +      tree phi_result = gimple_phi_result (phi);
>        /* Analysis is fully done on the reduction stmt invocation.  */
>        if (! vec_stmt)
>         {
> @@ -6141,7 +6141,7 @@ vectorizable_reduction (gimple *stmt, gi
>        for (unsigned k = 1; k < gimple_num_ops (reduc_stmt); ++k)
>         {
>           tree op = gimple_op (reduc_stmt, k);
> -         if (op == gimple_phi_result (stmt))
> +         if (op == phi_result)
>             continue;
>           if (k == 1
>               && gimple_assign_rhs_code (reduc_stmt) == COND_EXPR)


* Re: [27/46] Remove duplicated stmt_vec_info lookups
  2018-07-24 10:03 ` [27/46] Remove duplicated stmt_vec_info lookups Richard Sandiford
@ 2018-07-25  9:32   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:32 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:03 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Various places called vect_dr_stmt or vinfo_for_stmt multiple times
> on the same input.  This patch makes them reuse the earlier result.
> It also splits a couple of single vinfo_for_stmt calls out into
> separate statements so that they can be reused in later patches.
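>
> get_group_alias_ptr_type shows the shape of the change; in sketch
> form:
>
>   /* Before: two lookups on the same stmt.  */
>   first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
>   stmt_vec_info next_stmt_info
>     = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
>
>   /* After: look the stmt up once and reuse the result.  */
>   stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
>   first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>   stmt_vec_info next_stmt_info = DR_GROUP_NEXT_ELEMENT (first_stmt_info);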

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-data-refs.c (vect_analyze_data_ref_dependence)
>         (vect_slp_analyze_node_dependences, vect_analyze_data_ref_accesses)
>         (vect_permute_store_chain, vect_permute_load_chain)
>         (vect_shift_permute_load_chain, vect_transform_grouped_load): Avoid
>         repeated stmt_vec_info lookups.
>         * tree-vect-loop-manip.c (vect_can_advance_ivs_p): Likewise.
>         (vect_update_ivs_after_vectorizer): Likewise.
>         * tree-vect-loop.c (vect_is_simple_reduction): Likewise.
>         (vect_create_epilog_for_reduction, vectorizable_reduction): Likewise.
>         * tree-vect-patterns.c (adjust_bool_stmts): Likewise.
>         * tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
>         (vect_bb_slp_scalar_cost): Likewise.
>         * tree-vect-stmts.c (get_group_alias_ptr_type): Likewise.
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:28.452793542 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:31.736764378 +0100
> @@ -472,8 +472,7 @@ vect_analyze_data_ref_dependence (struct
>                 ... = a[i];
>                 a[i+1] = ...;
>              where loads from the group interleave with the store.  */
> -         if (!vect_preserves_scalar_order_p (vect_dr_stmt(dra),
> -                                             vect_dr_stmt (drb)))
> +         if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
>             {
>               if (dump_enabled_p ())
>                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -673,6 +672,7 @@ vect_slp_analyze_node_dependences (slp_i
>       in NODE verifying we can sink them up to the last stmt in the
>       group.  */
>    stmt_vec_info last_access_info = vect_find_last_scalar_stmt_in_slp (node);
> +  vec_info *vinfo = last_access_info->vinfo;
>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>      {
>        stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
> @@ -691,7 +691,8 @@ vect_slp_analyze_node_dependences (slp_i
>
>           /* If we couldn't record a (single) data reference for this
>              stmt we have to resort to the alias oracle.  */
> -         data_reference *dr_b = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
> +         stmt_vec_info stmt_info = vinfo->lookup_stmt (stmt);
> +         data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info);
>           if (!dr_b)
>             {
>               /* We are moving a store or sinking a load - this means
> @@ -2951,7 +2952,7 @@ vect_analyze_data_ref_accesses (vec_info
>               || data_ref_compare_tree (DR_BASE_ADDRESS (dra),
>                                         DR_BASE_ADDRESS (drb)) != 0
>               || data_ref_compare_tree (DR_OFFSET (dra), DR_OFFSET (drb)) != 0
> -             || !can_group_stmts_p (vect_dr_stmt (dra), vect_dr_stmt (drb)))
> +             || !can_group_stmts_p (stmtinfo_a, stmtinfo_b))
>             break;
>
>           /* Check that the data-refs have the same constant size.  */
> @@ -3040,11 +3041,11 @@ vect_analyze_data_ref_accesses (vec_info
>           /* Link the found element into the group list.  */
>           if (!DR_GROUP_FIRST_ELEMENT (stmtinfo_a))
>             {
> -             DR_GROUP_FIRST_ELEMENT (stmtinfo_a) = vect_dr_stmt (dra);
> +             DR_GROUP_FIRST_ELEMENT (stmtinfo_a) = stmtinfo_a;
>               lastinfo = stmtinfo_a;
>             }
> -         DR_GROUP_FIRST_ELEMENT (stmtinfo_b) = vect_dr_stmt (dra);
> -         DR_GROUP_NEXT_ELEMENT (lastinfo) = vect_dr_stmt (drb);
> +         DR_GROUP_FIRST_ELEMENT (stmtinfo_b) = stmtinfo_a;
> +         DR_GROUP_NEXT_ELEMENT (lastinfo) = stmtinfo_b;
>           lastinfo = stmtinfo_b;
>         }
>      }
> @@ -5219,9 +5220,10 @@ vect_permute_store_chain (vec<tree> dr_c
>                           gimple_stmt_iterator *gsi,
>                           vec<tree> *result_chain)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vect1, vect2, high, low;
>    gimple *perm_stmt;
> -  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> +  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    tree perm_mask_low, perm_mask_high;
>    tree data_ref;
>    tree perm3_mask_low, perm3_mask_high;
> @@ -5840,11 +5842,12 @@ vect_permute_load_chain (vec<tree> dr_ch
>                          gimple_stmt_iterator *gsi,
>                          vec<tree> *result_chain)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree data_ref, first_vect, second_vect;
>    tree perm_mask_even, perm_mask_odd;
>    tree perm3_mask_low, perm3_mask_high;
>    gimple *perm_stmt;
> -  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> +  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    unsigned int i, j, log_length = exact_log2 (length);
>
>    result_chain->quick_grow (length);
> @@ -6043,14 +6046,14 @@ vect_shift_permute_load_chain (vec<tree>
>                                gimple_stmt_iterator *gsi,
>                                vec<tree> *result_chain)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vect[3], vect_shift[3], data_ref, first_vect, second_vect;
>    tree perm2_mask1, perm2_mask2, perm3_mask;
>    tree select_mask, shift1_mask, shift2_mask, shift3_mask, shift4_mask;
>    gimple *perm_stmt;
>
> -  tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> +  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    unsigned int i;
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>
>    unsigned HOST_WIDE_INT nelt, vf;
> @@ -6310,6 +6313,7 @@ vect_shift_permute_load_chain (vec<tree>
>  vect_transform_grouped_load (gimple *stmt, vec<tree> dr_chain, int size,
>                              gimple_stmt_iterator *gsi)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    machine_mode mode;
>    vec<tree> result_chain = vNULL;
>
> @@ -6321,7 +6325,7 @@ vect_transform_grouped_load (gimple *stm
>    /* If reassociation width for vector type is 2 or greater target machine can
>       execute 2 or more vector instructions in parallel.  Otherwise try to
>       get chain for loads group using vect_shift_permute_load_chain.  */
> -  mode = TYPE_MODE (STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt)));
> +  mode = TYPE_MODE (STMT_VINFO_VECTYPE (stmt_info));
>    if (targetm.sched.reassociation_width (VEC_PERM_EXPR, mode) > 1
>        || pow2p_hwi (size)
>        || !vect_shift_permute_load_chain (dr_chain, size, stmt,
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:23:18.856878757 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:23:31.736764378 +0100
> @@ -1377,6 +1377,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
>        tree evolution_part;
>
>        gphi *phi = gsi.phi ();
> +      stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
>        if (dump_enabled_p ())
>         {
>            dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
> @@ -1397,8 +1398,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
>
>        /* Analyze the evolution function.  */
>
> -      evolution_part
> -       = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (vinfo_for_stmt (phi));
> +      evolution_part = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (phi_info);
>        if (evolution_part == NULL_TREE)
>          {
>           if (dump_enabled_p ())
> @@ -1500,6 +1500,7 @@ vect_update_ivs_after_vectorizer (loop_v
>
>        gphi *phi = gsi.phi ();
>        gphi *phi1 = gsi1.phi ();
> +      stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
>        if (dump_enabled_p ())
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
> @@ -1517,7 +1518,7 @@ vect_update_ivs_after_vectorizer (loop_v
>         }
>
>        type = TREE_TYPE (gimple_phi_result (phi));
> -      step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (vinfo_for_stmt (phi));
> +      step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (phi_info);
>        step_expr = unshare_expr (step_expr);
>
>        /* FORNOW: We do not support IVs whose evolution function is a polynomial
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:28.456793506 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:31.740764343 +0100
> @@ -3252,7 +3252,7 @@ vect_is_simple_reduction (loop_vec_info
>      }
>
>    /* Dissolve group eventually half-built by vect_is_slp_reduction.  */
> -  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (def_stmt));
> +  stmt_vec_info first = REDUC_GROUP_FIRST_ELEMENT (def_stmt_info);
>    while (first)
>      {
>        stmt_vec_info next = REDUC_GROUP_NEXT_ELEMENT (first);
> @@ -4784,7 +4784,7 @@ vect_create_epilog_for_reduction (vec<tr
>       # b1 = phi <b2, b0>
>       a2 = operation (a1)
>       b2 = operation (b1)  */
> -  slp_reduc = (slp_node && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)));
> +  slp_reduc = (slp_node && !REDUC_GROUP_FIRST_ELEMENT (stmt_info));
>
>    /* True if we should implement SLP_REDUC using native reduction operations
>       instead of scalar operations.  */
> @@ -4799,7 +4799,7 @@ vect_create_epilog_for_reduction (vec<tr
>
>       we may end up with more than one vector result.  Here we reduce them to
>       one vector.  */
> -  if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) || direct_slp_reduc)
> +  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info) || direct_slp_reduc)
>      {
>        tree first_vect = PHI_RESULT (new_phis[0]);
>        gassign *new_vec_stmt = NULL;
> @@ -5544,7 +5544,7 @@ vect_create_epilog_for_reduction (vec<tr
>       necessary, hence we set here REDUC_GROUP_SIZE to 1.  SCALAR_DEST is the
>       LHS of the last stmt in the reduction chain, since we are looking for
>       the loop exit phi node.  */
> -  if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  if (REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        stmt_vec_info dest_stmt_info
>         = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
> @@ -6095,8 +6095,8 @@ vectorizable_reduction (gimple *stmt, gi
>    tree cond_reduc_val = NULL_TREE;
>
>    /* Make sure it was already recognized as a reduction computation.  */
> -  if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != vect_reduction_def
> -      && STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != vect_nested_cycle)
> +  if (STMT_VINFO_DEF_TYPE (stmt_info) != vect_reduction_def
> +      && STMT_VINFO_DEF_TYPE (stmt_info) != vect_nested_cycle)
>      return false;
>
>    if (nested_in_vect_loop_p (loop, stmt))
> @@ -6789,7 +6789,7 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (reduction_type == FOLD_LEFT_REDUCTION
>        && slp_node
> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        /* We cannot use in-order reductions in this case because there is
>          an implicit reassociation of the operations involved.  */
> @@ -6818,7 +6818,7 @@ vectorizable_reduction (gimple *stmt, gi
>
>    /* Check extra constraints for variable-length unchained SLP reductions.  */
>    if (STMT_SLP_TYPE (stmt_info)
> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info)
>        && !nunits_out.is_constant ())
>      {
>        /* We checked above that we could build the initial vector when
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:08.536970400 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:23:31.740764343 +0100
> @@ -3505,6 +3505,8 @@ sort_after_uid (const void *p1, const vo
>  adjust_bool_stmts (hash_set <gimple *> &bool_stmt_set,
>                    tree out_type, gimple *stmt)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +
>    /* Gather original stmts in the bool pattern in their order of appearance
>       in the IL.  */
>    auto_vec<gimple *> bool_stmts (bool_stmt_set.elements ());
> @@ -3517,11 +3519,11 @@ adjust_bool_stmts (hash_set <gimple *> &
>    hash_map <tree, tree> defs;
>    for (unsigned i = 0; i < bool_stmts.length (); ++i)
>      adjust_bool_pattern (gimple_assign_lhs (bool_stmts[i]),
> -                        out_type, vinfo_for_stmt (stmt), defs);
> +                        out_type, stmt_info, defs);
>
>    /* Pop the last pattern seq stmt and install it as pattern root for STMT.  */
>    gimple *pattern_stmt
> -    = gimple_seq_last_stmt (STMT_VINFO_PATTERN_DEF_SEQ (vinfo_for_stmt (stmt)));
> +    = gimple_seq_last_stmt (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info));
>    return gimple_assign_lhs (pattern_stmt);
>  }
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:25.232822136 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:31.740764343 +0100
> @@ -2157,8 +2157,8 @@ vect_analyze_slp_instance (vec_info *vin
>       vector size.  */
>    unsigned HOST_WIDE_INT const_nunits;
>    if (is_a <bb_vec_info> (vinfo)
> -      && STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
> -      && DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
> +      && STMT_VINFO_GROUPED_ACCESS (stmt_info)
> +      && DR_GROUP_FIRST_ELEMENT (stmt_info)
>        && nunits.is_constant (&const_nunits))
>      {
>        /* We consider breaking the group only on VF boundaries from the existing
> @@ -2693,6 +2693,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
>    FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
>        gimple *stmt = stmt_info->stmt;
> +      vec_info *vinfo = stmt_info->vinfo;
>        ssa_op_iter op_iter;
>        def_operand_p def_p;
>
> @@ -2709,12 +2710,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
>           imm_use_iterator use_iter;
>           gimple *use_stmt;
>           FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
> -           if (!is_gimple_debug (use_stmt)
> -               && (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
> -                   || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
> +           if (!is_gimple_debug (use_stmt))
>               {
> -               (*life)[i] = true;
> -               BREAK_FROM_IMM_USE_STMT (use_iter);
> +               stmt_vec_info use_stmt_info = vinfo->lookup_stmt (use_stmt);
> +               if (!use_stmt_info || !PURE_SLP_STMT (use_stmt_info))
> +                 {
> +                   (*life)[i] = true;
> +                   BREAK_FROM_IMM_USE_STMT (use_iter);
> +                 }
>               }
>         }
>        if ((*life)[i])
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:28.456793506 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:31.744764307 +0100
> @@ -6193,11 +6193,11 @@ ensure_base_align (struct data_reference
>  static tree
>  get_group_alias_ptr_type (gimple *first_stmt)
>  {
> +  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
>    struct data_reference *first_dr, *next_dr;
>
> -  first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
> -  stmt_vec_info next_stmt_info
> -    = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
> +  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  stmt_vec_info next_stmt_info = DR_GROUP_NEXT_ELEMENT (first_stmt_info);
>    while (next_stmt_info)
>      {
>        next_dr = STMT_VINFO_DATA_REF (next_stmt_info);


* Re: [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1)
  2018-07-24 10:04 ` [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1) Richard Sandiford
@ 2018-07-25  9:33   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25  9:33 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:04 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This first part makes functions use stmt_vec_infos instead of
> gimple stmts in cases where the stmt_vec_info was already available
> and where the change is mechanical.  Most of it is just replacing
> "stmt" with "stmt_info".

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences)
>         (vect_check_gather_scatter, vect_create_data_ref_ptr, bump_vector_ptr)
>         (vect_permute_store_chain, vect_setup_realignment)
>         (vect_permute_load_chain, vect_shift_permute_load_chain)
>         (vect_transform_grouped_load): Use stmt_vec_info rather than gimple
>         stmts internally, and when passing values to other vectorizer routines.
>         * tree-vect-loop-manip.c (vect_can_advance_ivs_p): Likewise.
>         * tree-vect-loop.c (vect_analyze_scalar_cycles_1)
>         (vect_analyze_loop_operations, get_initial_def_for_reduction)
>         (vect_create_epilog_for_reduction, vectorize_fold_left_reduction)
>         (vectorizable_reduction, vectorizable_induction)
>         (vectorizable_live_operation, vect_transform_loop_stmt)
>         (vect_transform_loop): Likewise.
>         * tree-vect-patterns.c (vect_reassociating_reduction_p)
>         (vect_recog_widen_op_pattern, vect_recog_mixed_size_cond_pattern)
>         (vect_recog_bool_pattern, vect_recog_gather_scatter_pattern): Likewise.
>         * tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
>         (vect_slp_analyze_node_operations_1): Likewise.
>         * tree-vect-stmts.c (vect_mark_relevant, process_use)
>         (exist_non_indexing_operands_for_use_p, vect_init_vector_1)
>         (vect_mark_stmts_to_be_vectorized, vect_get_vec_def_for_operand)
>         (vect_finish_stmt_generation_1, get_group_load_store_type)
>         (get_load_store_type, vect_build_gather_load_calls)
>         (vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
>         (vect_create_vectorized_demotion_stmts, vectorizable_conversion)
>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>         (vectorizable_comparison, vect_analyze_stmt, vect_transform_stmt)
>         (supportable_widening_operation): Likewise.
>         (vect_get_vector_types_for_stmt): Likewise.
>         * tree-vectorizer.h (vect_dr_behavior): Likewise.
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:31.736764378 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:35.376732054 +0100
> @@ -712,7 +712,7 @@ vect_slp_analyze_node_dependences (slp_i
>              been sunk to (and we verify if we can do that as well).  */
>           if (gimple_visited_p (stmt))
>             {
> -             if (stmt != last_store)
> +             if (stmt_info != last_store)
>                 continue;
>               unsigned i;
>               stmt_vec_info store_info;
> @@ -3666,7 +3666,7 @@ vect_check_gather_scatter (gimple *stmt,
>
>    /* See whether this is already a call to a gather/scatter internal function.
>       If not, see whether it's a masked load or store.  */
> -  gcall *call = dyn_cast <gcall *> (stmt);
> +  gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
>    if (call && gimple_call_internal_p (call))
>      {
>        ifn = gimple_call_internal_fn (call);
> @@ -4677,8 +4677,8 @@ vect_create_data_ref_ptr (gimple *stmt,
>    if (loop_vinfo)
>      {
>        loop = LOOP_VINFO_LOOP (loop_vinfo);
> -      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt);
> -      containing_loop = (gimple_bb (stmt))->loop_father;
> +      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info);
> +      containing_loop = (gimple_bb (stmt_info->stmt))->loop_father;
>        pe = loop_preheader_edge (loop);
>      }
>    else
> @@ -4786,7 +4786,7 @@ vect_create_data_ref_ptr (gimple *stmt,
>
>    /* Create: (&(base[init_val+offset]+byte_offset) in the loop preheader.  */
>
> -  new_temp = vect_create_addr_base_for_vector_ref (stmt, &new_stmt_list,
> +  new_temp = vect_create_addr_base_for_vector_ref (stmt_info, &new_stmt_list,
>                                                    offset, byte_offset);
>    if (new_stmt_list)
>      {
> @@ -4934,7 +4934,7 @@ bump_vector_ptr (tree dataref_ptr, gimpl
>      new_dataref_ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
>    incr_stmt = gimple_build_assign (new_dataref_ptr, POINTER_PLUS_EXPR,
>                                    dataref_ptr, update);
> -  vect_finish_stmt_generation (stmt, incr_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, incr_stmt, gsi);
>
>    /* Copy the points-to information if it exists. */
>    if (DR_PTR_INFO (dr))
> @@ -5282,7 +5282,7 @@ vect_permute_store_chain (vec<tree> dr_c
>           data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low");
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1,
>                                            vect2, perm3_mask_low);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>
>           vect1 = data_ref;
>           vect2 = dr_chain[2];
> @@ -5293,7 +5293,7 @@ vect_permute_store_chain (vec<tree> dr_c
>           data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high");
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1,
>                                            vect2, perm3_mask_high);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>           (*result_chain)[j] = data_ref;
>         }
>      }
> @@ -5332,7 +5332,7 @@ vect_permute_store_chain (vec<tree> dr_c
>                 high = make_temp_ssa_name (vectype, NULL, "vect_inter_high");
>                 perm_stmt = gimple_build_assign (high, VEC_PERM_EXPR, vect1,
>                                                  vect2, perm_mask_high);
> -               vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +               vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>                 (*result_chain)[2*j] = high;
>
>                 /* Create interleaving stmt:
> @@ -5342,7 +5342,7 @@ vect_permute_store_chain (vec<tree> dr_c
>                 low = make_temp_ssa_name (vectype, NULL, "vect_inter_low");
>                 perm_stmt = gimple_build_assign (low, VEC_PERM_EXPR, vect1,
>                                                  vect2, perm_mask_low);
> -               vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +               vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>                 (*result_chain)[2*j+1] = low;
>               }
>             memcpy (dr_chain.address (), result_chain->address (),
> @@ -5415,7 +5415,7 @@ vect_setup_realignment (gimple *stmt, gi
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    struct loop *loop = NULL;
>    edge pe = NULL;
> -  tree scalar_dest = gimple_assign_lhs (stmt);
> +  tree scalar_dest = gimple_assign_lhs (stmt_info->stmt);
>    tree vec_dest;
>    gimple *inc;
>    tree ptr;
> @@ -5429,13 +5429,13 @@ vect_setup_realignment (gimple *stmt, gi
>    bool inv_p;
>    bool compute_in_loop = false;
>    bool nested_in_vect_loop = false;
> -  struct loop *containing_loop = (gimple_bb (stmt))->loop_father;
> +  struct loop *containing_loop = (gimple_bb (stmt_info->stmt))->loop_father;
>    struct loop *loop_for_initial_load = NULL;
>
>    if (loop_vinfo)
>      {
>        loop = LOOP_VINFO_LOOP (loop_vinfo);
> -      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt);
> +      nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info);
>      }
>
>    gcc_assert (alignment_support_scheme == dr_explicit_realign
> @@ -5518,9 +5518,9 @@ vect_setup_realignment (gimple *stmt, gi
>
>        gcc_assert (!compute_in_loop);
>        vec_dest = vect_create_destination_var (scalar_dest, vectype);
> -      ptr = vect_create_data_ref_ptr (stmt, vectype, loop_for_initial_load,
> -                                     NULL_TREE, &init_addr, NULL, &inc,
> -                                     true, &inv_p);
> +      ptr = vect_create_data_ref_ptr (stmt_info, vectype,
> +                                     loop_for_initial_load, NULL_TREE,
> +                                     &init_addr, NULL, &inc, true, &inv_p);
>        if (TREE_CODE (ptr) == SSA_NAME)
>         new_temp = copy_ssa_name (ptr);
>        else
> @@ -5562,7 +5562,7 @@ vect_setup_realignment (gimple *stmt, gi
>        if (!init_addr)
>         {
>           /* Generate the INIT_ADDR computation outside LOOP.  */
> -         init_addr = vect_create_addr_base_for_vector_ref (stmt, &stmts,
> +         init_addr = vect_create_addr_base_for_vector_ref (stmt_info, &stmts,
>                                                             NULL_TREE);
>            if (loop)
>              {
> @@ -5890,7 +5890,7 @@ vect_permute_load_chain (vec<tree> dr_ch
>           data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low");
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect,
>                                            second_vect, perm3_mask_low);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>
>           /* Create interleaving stmt (high part of):
>              high = VEC_PERM_EXPR <first_vect, second_vect2, {k, 3 + k, 6 + k,
> @@ -5900,7 +5900,7 @@ vect_permute_load_chain (vec<tree> dr_ch
>           data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high");
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect,
>                                            second_vect, perm3_mask_high);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>           (*result_chain)[k] = data_ref;
>         }
>      }
> @@ -5935,7 +5935,7 @@ vect_permute_load_chain (vec<tree> dr_ch
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                first_vect, second_vect,
>                                                perm_mask_even);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               (*result_chain)[j/2] = data_ref;
>
>               /* data_ref = permute_odd (first_data_ref, second_data_ref);  */
> @@ -5943,7 +5943,7 @@ vect_permute_load_chain (vec<tree> dr_ch
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                first_vect, second_vect,
>                                                perm_mask_odd);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               (*result_chain)[j/2+length/2] = data_ref;
>             }
>           memcpy (dr_chain.address (), result_chain->address (),
> @@ -6143,26 +6143,26 @@ vect_shift_permute_load_chain (vec<tree>
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                first_vect, first_vect,
>                                                perm2_mask1);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               vect[0] = data_ref;
>
>               data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle2");
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                second_vect, second_vect,
>                                                perm2_mask2);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               vect[1] = data_ref;
>
>               data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift");
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                vect[0], vect[1], shift1_mask);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               (*result_chain)[j/2 + length/2] = data_ref;
>
>               data_ref = make_temp_ssa_name (vectype, NULL, "vect_select");
>               perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                                vect[0], vect[1], select_mask);
> -             vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +             vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>               (*result_chain)[j/2] = data_ref;
>             }
>           memcpy (dr_chain.address (), result_chain->address (),
> @@ -6259,7 +6259,7 @@ vect_shift_permute_load_chain (vec<tree>
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                            dr_chain[k], dr_chain[k],
>                                            perm3_mask);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>           vect[k] = data_ref;
>         }
>
> @@ -6269,7 +6269,7 @@ vect_shift_permute_load_chain (vec<tree>
>           perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR,
>                                            vect[k % 3], vect[(k + 1) % 3],
>                                            shift1_mask);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>           vect_shift[k] = data_ref;
>         }
>
> @@ -6280,7 +6280,7 @@ vect_shift_permute_load_chain (vec<tree>
>                                            vect_shift[(4 - k) % 3],
>                                            vect_shift[(3 - k) % 3],
>                                            shift2_mask);
> -         vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>           vect[k] = data_ref;
>         }
>
> @@ -6289,13 +6289,13 @@ vect_shift_permute_load_chain (vec<tree>
>        data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift3");
>        perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[0],
>                                        vect[0], shift3_mask);
> -      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>        (*result_chain)[nelt % 3] = data_ref;
>
>        data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift4");
>        perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[1],
>                                        vect[1], shift4_mask);
> -      vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +      vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>        (*result_chain)[0] = data_ref;
>        return true;
>      }
> @@ -6328,10 +6328,10 @@ vect_transform_grouped_load (gimple *stm
>    mode = TYPE_MODE (STMT_VINFO_VECTYPE (stmt_info));
>    if (targetm.sched.reassociation_width (VEC_PERM_EXPR, mode) > 1
>        || pow2p_hwi (size)
> -      || !vect_shift_permute_load_chain (dr_chain, size, stmt,
> +      || !vect_shift_permute_load_chain (dr_chain, size, stmt_info,
>                                          gsi, &result_chain))
> -    vect_permute_load_chain (dr_chain, size, stmt, gsi, &result_chain);
> -  vect_record_grouped_load_vectors (stmt, result_chain);
> +    vect_permute_load_chain (dr_chain, size, stmt_info, gsi, &result_chain);
> +  vect_record_grouped_load_vectors (stmt_info, result_chain);
>    result_chain.release ();
>  }
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:23:31.736764378 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:23:35.376732054 +0100
> @@ -1380,8 +1380,8 @@ vect_can_advance_ivs_p (loop_vec_info lo
>        stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
>        if (dump_enabled_p ())
>         {
> -          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
> -          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
> +         dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0);
>         }
>
>        /* Skip virtual phi's. The data dependences that are associated with
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:31.740764343 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:35.376732054 +0100
> @@ -526,7 +526,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>           || (LOOP_VINFO_LOOP (loop_vinfo) != loop
>               && TREE_CODE (step) != INTEGER_CST))
>         {
> -         worklist.safe_push (phi);
> +         worklist.safe_push (stmt_vinfo);
>           continue;
>         }
>
> @@ -1595,11 +1595,12 @@ vect_analyze_loop_operations (loop_vec_i
>                need_to_vectorize = true;
>                if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_induction_def
>                   && ! PURE_SLP_STMT (stmt_info))
> -                ok = vectorizable_induction (phi, NULL, NULL, NULL, &cost_vec);
> +               ok = vectorizable_induction (stmt_info, NULL, NULL, NULL,
> +                                            &cost_vec);
>               else if ((STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
>                         || STMT_VINFO_DEF_TYPE (stmt_info) == vect_nested_cycle)
>                        && ! PURE_SLP_STMT (stmt_info))
> -               ok = vectorizable_reduction (phi, NULL, NULL, NULL, NULL,
> +               ok = vectorizable_reduction (stmt_info, NULL, NULL, NULL, NULL,
>                                              &cost_vec);
>              }
>
> @@ -1607,7 +1608,7 @@ vect_analyze_loop_operations (loop_vec_i
>           if (ok
>               && STMT_VINFO_LIVE_P (stmt_info)
>               && !PURE_SLP_STMT (stmt_info))
> -           ok = vectorizable_live_operation (phi, NULL, NULL, -1, NULL,
> +           ok = vectorizable_live_operation (stmt_info, NULL, NULL, -1, NULL,
>                                               &cost_vec);
>
>            if (!ok)
> @@ -4045,7 +4046,7 @@ get_initial_def_for_reduction (gimple *s
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    tree scalar_type = TREE_TYPE (init_val);
>    tree vectype = get_vectype_for_scalar_type (scalar_type);
> -  enum tree_code code = gimple_assign_rhs_code (stmt);
> +  enum tree_code code = gimple_assign_rhs_code (stmt_vinfo->stmt);
>    tree def_for_init;
>    tree init_def;
>    REAL_VALUE_TYPE real_init_val = dconst0;
> @@ -4057,8 +4058,8 @@ get_initial_def_for_reduction (gimple *s
>    gcc_assert (POINTER_TYPE_P (scalar_type) || INTEGRAL_TYPE_P (scalar_type)
>               || SCALAR_FLOAT_TYPE_P (scalar_type));
>
> -  gcc_assert (nested_in_vect_loop_p (loop, stmt)
> -             || loop == (gimple_bb (stmt))->loop_father);
> +  gcc_assert (nested_in_vect_loop_p (loop, stmt_vinfo)
> +             || loop == (gimple_bb (stmt_vinfo->stmt))->loop_father);
>
>    vect_reduction_type reduction_type
>      = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_vinfo);
> @@ -4127,7 +4128,7 @@ get_initial_def_for_reduction (gimple *s
>             if (reduction_type != COND_REDUCTION
>                 && reduction_type != EXTRACT_LAST_REDUCTION)
>               {
> -               init_def = vect_get_vec_def_for_operand (init_val, stmt);
> +               init_def = vect_get_vec_def_for_operand (init_val, stmt_vinfo);
>                 break;
>               }
>           }
> @@ -4406,7 +4407,7 @@ vect_create_epilog_for_reduction (vec<tr
>    tree vec_dest;
>    tree new_temp = NULL_TREE, new_dest, new_name, new_scalar_dest;
>    gimple *epilog_stmt = NULL;
> -  enum tree_code code = gimple_assign_rhs_code (stmt);
> +  enum tree_code code = gimple_assign_rhs_code (stmt_info->stmt);
>    gimple *exit_phi;
>    tree bitsize;
>    tree adjustment_def = NULL;
> @@ -4435,7 +4436,7 @@ vect_create_epilog_for_reduction (vec<tr
>    if (slp_node)
>      group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
>
> -  if (nested_in_vect_loop_p (loop, stmt))
> +  if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        outer_loop = loop;
>        loop = loop->inner;
> @@ -4504,11 +4505,13 @@ vect_create_epilog_for_reduction (vec<tr
>           /* Do not use an adjustment def as that case is not supported
>              correctly if ncopies is not one.  */
>           vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt);
> -         vec_initial_def = vect_get_vec_def_for_operand (initial_def, stmt);
> +         vec_initial_def = vect_get_vec_def_for_operand (initial_def,
> +                                                         stmt_info);
>         }
>        else
> -       vec_initial_def = get_initial_def_for_reduction (stmt, initial_def,
> -                                                        &adjustment_def);
> +       vec_initial_def
> +         = get_initial_def_for_reduction (stmt_info, initial_def,
> +                                          &adjustment_def);
>        vec_initial_defs.create (1);
>        vec_initial_defs.quick_push (vec_initial_def);
>      }
> @@ -5676,7 +5679,7 @@ vect_create_epilog_for_reduction (vec<tr
>                    preheader_arg = PHI_ARG_DEF_FROM_EDGE (use_stmt,
>                                               loop_preheader_edge (outer_loop));
>                    vect_phi_init = get_initial_def_for_reduction
> -                   (stmt, preheader_arg, NULL);
> +                   (stmt_info, preheader_arg, NULL);
>
>                    /* Update phi node arguments with vs0 and vs2.  */
>                    add_phi_arg (vect_phi, vect_phi_init,
> @@ -5841,7 +5844,7 @@ vectorize_fold_left_reduction (gimple *s
>    else
>      ncopies = vect_get_num_copies (loop_vinfo, vectype_in);
>
> -  gcc_assert (!nested_in_vect_loop_p (loop, stmt));
> +  gcc_assert (!nested_in_vect_loop_p (loop, stmt_info));
>    gcc_assert (ncopies == 1);
>    gcc_assert (TREE_CODE_LENGTH (code) == binary_op);
>    gcc_assert (reduc_index == (code == MINUS_EXPR ? 0 : 1));
> @@ -5859,13 +5862,14 @@ vectorize_fold_left_reduction (gimple *s
>    auto_vec<tree> vec_oprnds0;
>    if (slp_node)
>      {
> -      vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL, slp_node);
> +      vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL,
> +                        slp_node);
>        group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
>        scalar_dest_def_info = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
>      }
>    else
>      {
> -      tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt);
> +      tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt_info);
>        vec_oprnds0.create (1);
>        vec_oprnds0.quick_push (loop_vec_def0);
>        scalar_dest_def_info = stmt_info;
> @@ -6099,7 +6103,7 @@ vectorizable_reduction (gimple *stmt, gi
>        && STMT_VINFO_DEF_TYPE (stmt_info) != vect_nested_cycle)
>      return false;
>
> -  if (nested_in_vect_loop_p (loop, stmt))
> +  if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        loop = loop->inner;
>        nested_cycle = true;
> @@ -6109,7 +6113,7 @@ vectorizable_reduction (gimple *stmt, gi
>      gcc_assert (slp_node
>                 && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
>
> -  if (gphi *phi = dyn_cast <gphi *> (stmt))
> +  if (gphi *phi = dyn_cast <gphi *> (stmt_info->stmt))
>      {
>        tree phi_result = gimple_phi_result (phi);
>        /* Analysis is fully done on the reduction stmt invocation.  */
> @@ -6164,7 +6168,7 @@ vectorizable_reduction (gimple *stmt, gi
>           && STMT_VINFO_RELEVANT (reduc_stmt_info) <= vect_used_only_live
>           && (use_stmt_info = loop_vinfo->lookup_single_use (phi_result))
>           && (use_stmt_info == reduc_stmt_info
> -             || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt))
> +             || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt_info))
>         single_defuse_cycle = true;
>
>        /* Create the destination vector  */
> @@ -6548,7 +6552,7 @@ vectorizable_reduction (gimple *stmt, gi
>      {
>        /* Only call during the analysis stage, otherwise we'll lose
>          STMT_VINFO_TYPE.  */
> -      if (!vec_stmt && !vectorizable_condition (stmt, gsi, NULL,
> +      if (!vec_stmt && !vectorizable_condition (stmt_info, gsi, NULL,
>                                                 ops[reduc_index], 0, NULL,
>                                                 cost_vec))
>          {
> @@ -6935,7 +6939,7 @@ vectorizable_reduction (gimple *stmt, gi
>        && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live)
>        && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result))
>        && (use_stmt_info == stmt_info
> -         || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt))
> +         || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt_info))
>      {
>        single_defuse_cycle = true;
>        epilog_copies = 1;
> @@ -7015,13 +7019,13 @@ vectorizable_reduction (gimple *stmt, gi
>
>    if (reduction_type == FOLD_LEFT_REDUCTION)
>      return vectorize_fold_left_reduction
> -      (stmt, gsi, vec_stmt, slp_node, reduc_def_phi, code,
> +      (stmt_info, gsi, vec_stmt, slp_node, reduc_def_phi, code,
>         reduc_fn, ops, vectype_in, reduc_index, masks);
>
>    if (reduction_type == EXTRACT_LAST_REDUCTION)
>      {
>        gcc_assert (!slp_node);
> -      return vectorizable_condition (stmt, gsi, vec_stmt,
> +      return vectorizable_condition (stmt_info, gsi, vec_stmt,
>                                      NULL, reduc_index, NULL, NULL);
>      }
>
> @@ -7053,7 +7057,7 @@ vectorizable_reduction (gimple *stmt, gi
>        if (code == COND_EXPR)
>          {
>            gcc_assert (!slp_node);
> -         vectorizable_condition (stmt, gsi, vec_stmt,
> +         vectorizable_condition (stmt_info, gsi, vec_stmt,
>                                   PHI_RESULT (phis[0]->stmt),
>                                   reduc_index, NULL, NULL);
>            /* Multiple types are not supported for condition.  */
> @@ -7090,12 +7094,12 @@ vectorizable_reduction (gimple *stmt, gi
>            else
>             {
>                vec_oprnds0.quick_push
> -               (vect_get_vec_def_for_operand (ops[0], stmt));
> +               (vect_get_vec_def_for_operand (ops[0], stmt_info));
>                vec_oprnds1.quick_push
> -               (vect_get_vec_def_for_operand (ops[1], stmt));
> +               (vect_get_vec_def_for_operand (ops[1], stmt_info));
>                if (op_type == ternary_op)
>                 vec_oprnds2.quick_push
> -                 (vect_get_vec_def_for_operand (ops[2], stmt));
> +                 (vect_get_vec_def_for_operand (ops[2], stmt_info));
>             }
>          }
>        else
> @@ -7144,7 +7148,8 @@ vectorizable_reduction (gimple *stmt, gi
>               new_temp = make_ssa_name (vec_dest, call);
>               gimple_call_set_lhs (call, new_temp);
>               gimple_call_set_nothrow (call, true);
> -             new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt_info, call, gsi);
>             }
>           else
>             {
> @@ -7156,7 +7161,7 @@ vectorizable_reduction (gimple *stmt, gi
>               new_temp = make_ssa_name (vec_dest, new_stmt);
>               gimple_assign_set_lhs (new_stmt, new_temp);
>               new_stmt_info
> -               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +               = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>             }
>
>            if (slp_node)
> @@ -7184,7 +7189,7 @@ vectorizable_reduction (gimple *stmt, gi
>    if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
>      vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
>
> -  vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_phi,
> +  vect_create_epilog_for_reduction (vect_defs, stmt_info, reduc_def_phi,
>                                     epilog_copies, reduc_fn, phis,
>                                     double_reduc, slp_node, slp_node_instance,
>                                     cond_reduc_val, cond_reduc_op_code,
> @@ -7293,7 +7298,7 @@ vectorizable_induction (gimple *phi,
>    gcc_assert (ncopies >= 1);
>
>    /* FORNOW. These restrictions should be relaxed.  */
> -  if (nested_in_vect_loop_p (loop, phi))
> +  if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        imm_use_iterator imm_iter;
>        use_operand_p use_p;
> @@ -7443,10 +7448,10 @@ vectorizable_induction (gimple *phi,
>        new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr),
>                               expr, step_expr);
>        if (! CONSTANT_CLASS_P (new_name))
> -       new_name = vect_init_vector (phi, new_name,
> +       new_name = vect_init_vector (stmt_info, new_name,
>                                      TREE_TYPE (step_expr), NULL);
>        new_vec = build_vector_from_val (vectype, new_name);
> -      vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
> +      vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
>
>        /* Now generate the IVs.  */
>        unsigned group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
> @@ -7513,10 +7518,10 @@ vectorizable_induction (gimple *phi,
>           new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr),
>                                   expr, step_expr);
>           if (! CONSTANT_CLASS_P (new_name))
> -           new_name = vect_init_vector (phi, new_name,
> +           new_name = vect_init_vector (stmt_info, new_name,
>                                          TREE_TYPE (step_expr), NULL);
>           new_vec = build_vector_from_val (vectype, new_name);
> -         vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
> +         vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
>           for (; ivn < nvects; ++ivn)
>             {
>               gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs]->stmt;
> @@ -7549,7 +7554,7 @@ vectorizable_induction (gimple *phi,
>        /* iv_loop is nested in the loop to be vectorized.  init_expr had already
>          been created during vectorization of previous stmts.  We obtain it
>          from the STMT_VINFO_VEC_STMT of the defining stmt.  */
> -      vec_init = vect_get_vec_def_for_operand (init_expr, phi);
> +      vec_init = vect_get_vec_def_for_operand (init_expr, stmt_info);
>        /* If the initial value is not of proper type, convert it.  */
>        if (!useless_type_conversion_p (vectype, TREE_TYPE (vec_init)))
>         {
> @@ -7651,7 +7656,7 @@ vectorizable_induction (gimple *phi,
>    gcc_assert (CONSTANT_CLASS_P (new_name)
>               || TREE_CODE (new_name) == SSA_NAME);
>    new_vec = build_vector_from_val (vectype, t);
> -  vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
> +  vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
>
>
>    /* Create the following def-use cycle:
> @@ -7717,7 +7722,7 @@ vectorizable_induction (gimple *phi,
>        gcc_assert (CONSTANT_CLASS_P (new_name)
>                   || TREE_CODE (new_name) == SSA_NAME);
>        new_vec = build_vector_from_val (vectype, t);
> -      vec_step = vect_init_vector (phi, new_vec, vectype, NULL);
> +      vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL);
>
>        vec_def = induc_def;
>        prev_stmt_vinfo = induction_phi_info;
> @@ -7815,7 +7820,7 @@ vectorizable_live_operation (gimple *stm
>      return false;
>
>    /* FORNOW.  CHECKME.  */
> -  if (nested_in_vect_loop_p (loop, stmt))
> +  if (nested_in_vect_loop_p (loop, stmt_info))
>      return false;
>
>    /* If STMT is not relevant and it is a simple assignment and its inputs are
> @@ -7823,7 +7828,7 @@ vectorizable_live_operation (gimple *stm
>       scalar value that it computes will be used.  */
>    if (!STMT_VINFO_RELEVANT_P (stmt_info))
>      {
> -      gcc_assert (is_simple_and_all_uses_invariant (stmt, loop_vinfo));
> +      gcc_assert (is_simple_and_all_uses_invariant (stmt_info, loop_vinfo));
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_NOTE, vect_location,
>                          "statement is simple and uses invariant.  Leaving in "
> @@ -8222,11 +8227,11 @@ vect_transform_loop_stmt (loop_vec_info
>      {
>        dump_printf_loc (MSG_NOTE, vect_location,
>                        "------>vectorizing statement: ");
> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>      }
>
>    if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info))
> -    vect_loop_kill_debug_uses (loop, stmt);
> +    vect_loop_kill_debug_uses (loop, stmt_info);
>
>    if (!STMT_VINFO_RELEVANT_P (stmt_info)
>        && !STMT_VINFO_LIVE_P (stmt_info))
> @@ -8267,7 +8272,7 @@ vect_transform_loop_stmt (loop_vec_info
>      dump_printf_loc (MSG_NOTE, vect_location, "transform statement.\n");
>
>    bool grouped_store = false;
> -  if (vect_transform_stmt (stmt, gsi, &grouped_store, NULL, NULL))
> +  if (vect_transform_stmt (stmt_info, gsi, &grouped_store, NULL, NULL))
>      *seen_store = stmt_info;
>  }
>
> @@ -8422,7 +8427,7 @@ vect_transform_loop (loop_vec_info loop_
>             continue;
>
>           if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info))
> -           vect_loop_kill_debug_uses (loop, phi);
> +           vect_loop_kill_debug_uses (loop, stmt_info);
>
>           if (!STMT_VINFO_RELEVANT_P (stmt_info)
>               && !STMT_VINFO_LIVE_P (stmt_info))
> @@ -8441,7 +8446,7 @@ vect_transform_loop (loop_vec_info loop_
>             {
>               if (dump_enabled_p ())
>                 dump_printf_loc (MSG_NOTE, vect_location, "transform phi.\n");
> -             vect_transform_stmt (phi, NULL, NULL, NULL, NULL);
> +             vect_transform_stmt (stmt_info, NULL, NULL, NULL, NULL);
>             }
>         }
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:31.740764343 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:23:35.380732018 +0100
> @@ -842,7 +842,7 @@ vect_reassociating_reduction_p (stmt_vec
>    /* We don't allow changing the order of the computation in the inner-loop
>       when doing outer-loop vectorization.  */
>    struct loop *loop = LOOP_VINFO_LOOP (loop_info);
> -  if (loop && nested_in_vect_loop_p (loop, assign))
> +  if (loop && nested_in_vect_loop_p (loop, stmt_info))
>      return false;
>
>    if (!vect_reassociating_reduction_p (stmt_info))
> @@ -1196,7 +1196,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
>    auto_vec<tree> dummy_vec;
>    if (!vectype
>        || !vecitype
> -      || !supportable_widening_operation (wide_code, last_stmt,
> +      || !supportable_widening_operation (wide_code, last_stmt_info,
>                                           vecitype, vectype,
>                                           &dummy_code, &dummy_code,
>                                           &dummy_int, &dummy_vec))
> @@ -3118,11 +3118,11 @@ vect_recog_mixed_size_cond_pattern (stmt
>      return NULL;
>
>    if ((TREE_CODE (then_clause) != INTEGER_CST
> -       && !type_conversion_p (then_clause, last_stmt, false, &orig_type0,
> -                              &def_stmt0, &promotion))
> +       && !type_conversion_p (then_clause, stmt_vinfo, false, &orig_type0,
> +                             &def_stmt0, &promotion))
>        || (TREE_CODE (else_clause) != INTEGER_CST
> -          && !type_conversion_p (else_clause, last_stmt, false, &orig_type1,
> -                                 &def_stmt1, &promotion)))
> +         && !type_conversion_p (else_clause, stmt_vinfo, false, &orig_type1,
> +                                &def_stmt1, &promotion)))
>      return NULL;
>
>    if (orig_type0 && orig_type1
> @@ -3709,7 +3709,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>
>        if (check_bool_pattern (var, vinfo, bool_stmts))
>         {
> -         rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), last_stmt);
> +         rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), stmt_vinfo);
>           lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
>           if (useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
>             pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
> @@ -3776,7 +3776,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>        if (!check_bool_pattern (var, vinfo, bool_stmts))
>         return NULL;
>
> -      rhs = adjust_bool_stmts (bool_stmts, type, last_stmt);
> +      rhs = adjust_bool_stmts (bool_stmts, type, stmt_vinfo);
>
>        lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
>        pattern_stmt
> @@ -3800,7 +3800,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>         return NULL;
>
>        if (check_bool_pattern (var, vinfo, bool_stmts))
> -       rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), last_stmt);
> +       rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), stmt_vinfo);
>        else
>         {
>           tree type = search_type_for_mask (var, vinfo);
> @@ -4234,13 +4234,12 @@ vect_recog_gather_scatter_pattern (stmt_
>
>    /* Get the boolean that controls whether the load or store happens.
>       This is null if the operation is unconditional.  */
> -  gimple *stmt = stmt_info->stmt;
> -  tree mask = vect_get_load_store_mask (stmt);
> +  tree mask = vect_get_load_store_mask (stmt_info);
>
>    /* Make sure that the target supports an appropriate internal
>       function for the gather/scatter operation.  */
>    gather_scatter_info gs_info;
> -  if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info)
> +  if (!vect_check_gather_scatter (stmt_info, loop_vinfo, &gs_info)
>        || gs_info.decl)
>      return NULL;
>
> @@ -4273,7 +4272,7 @@ vect_recog_gather_scatter_pattern (stmt_
>      }
>    else
>      {
> -      tree rhs = vect_get_store_rhs (stmt);
> +      tree rhs = vect_get_store_rhs (stmt_info);
>        if (mask != NULL)
>         pattern_stmt = gimple_build_call_internal (IFN_MASK_SCATTER_STORE, 5,
>                                                    base, offset, scale, rhs,
> @@ -4295,7 +4294,7 @@ vect_recog_gather_scatter_pattern (stmt_
>
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    *type_out = vectype;
> -  vect_pattern_detected ("gather/scatter pattern", stmt);
> +  vect_pattern_detected ("gather/scatter pattern", stmt_info->stmt);
>
>    return pattern_stmt;
>  }
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:31.740764343 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:35.380732018 +0100
> @@ -2096,8 +2096,8 @@ vect_analyze_slp_instance (vec_info *vin
>                    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                    "Build SLP failed: unsupported load "
>                                    "permutation ");
> -                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
> -                                       TDF_SLIM, stmt, 0);
> +                 dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
> +                                   TDF_SLIM, stmt_info->stmt, 0);
>                  }
>               vect_free_slp_instance (new_instance, false);
>                return false;
> @@ -2172,8 +2172,9 @@ vect_analyze_slp_instance (vec_info *vin
>           gcc_assert ((const_nunits & (const_nunits - 1)) == 0);
>           unsigned group1_size = i & ~(const_nunits - 1);
>
> -         gimple *rest = vect_split_slp_store_group (stmt, group1_size);
> -         bool res = vect_analyze_slp_instance (vinfo, stmt, max_tree_size);
> +         gimple *rest = vect_split_slp_store_group (stmt_info, group1_size);
> +         bool res = vect_analyze_slp_instance (vinfo, stmt_info,
> +                                               max_tree_size);
>           /* If the first non-match was in the middle of a vector,
>              skip the rest of that vector.  */
>           if (group1_size < i)
> @@ -2513,7 +2514,6 @@ vect_slp_analyze_node_operations_1 (vec_
>                                     stmt_vector_for_cost *cost_vec)
>  {
>    stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  gimple *stmt = stmt_info->stmt;
>    gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
>
>    /* For BB vectorization vector types are assigned here.
> @@ -2567,7 +2567,7 @@ vect_slp_analyze_node_operations_1 (vec_
>      }
>
>    bool dummy;
> -  return vect_analyze_stmt (stmt, &dummy, node, node_instance, cost_vec);
> +  return vect_analyze_stmt (stmt_info, &dummy, node, node_instance, cost_vec);
>  }
>
>  /* Analyze statements contained in SLP tree NODE after recursively analyzing
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:31.744764307 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:35.384731983 +0100
> @@ -205,7 +205,7 @@ vect_mark_relevant (vec<gimple *> *workl
>      {
>        dump_printf_loc (MSG_NOTE, vect_location,
>                        "mark relevant %d, live %d: ", relevant, live_p);
> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>      }
>
>    /* If this stmt is an original stmt in a pattern, we might need to mark its
> @@ -244,7 +244,7 @@ vect_mark_relevant (vec<gimple *> *workl
>        return;
>      }
>
> -  worklist->safe_push (stmt);
> +  worklist->safe_push (stmt_info);
>  }
>
>
> @@ -389,10 +389,10 @@ exist_non_indexing_operands_for_use_p (t
>       Therefore, all we need to check is if STMT falls into the
>       first case, and whether var corresponds to USE.  */
>
> -  gassign *assign = dyn_cast <gassign *> (stmt);
> +  gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
>    if (!assign || !gimple_assign_copy_p (assign))
>      {
> -      gcall *call = dyn_cast <gcall *> (stmt);
> +      gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
>        if (call && gimple_call_internal_p (call))
>         {
>           internal_fn ifn = gimple_call_internal_fn (call);
> @@ -463,7 +463,7 @@ process_use (gimple *stmt, tree use, loo
>
>    /* case 1: we are only interested in uses that need to be vectorized.  Uses
>       that are used for address computation are not considered relevant.  */
> -  if (!force && !exist_non_indexing_operands_for_use_p (use, stmt))
> +  if (!force && !exist_non_indexing_operands_for_use_p (use, stmt_vinfo))
>       return true;
>
>    if (!vect_is_simple_use (use, loop_vinfo, &dt, &dstmt_vinfo))
> @@ -484,8 +484,8 @@ process_use (gimple *stmt, tree use, loo
>       only way that STMT, which is a reduction-phi, was put in the worklist,
>       as there should be no other uses for DSTMT_VINFO in the loop.  So we just
>       check that everything is as expected, and we are done.  */
> -  bb = gimple_bb (stmt);
> -  if (gimple_code (stmt) == GIMPLE_PHI
> +  bb = gimple_bb (stmt_vinfo->stmt);
> +  if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI
>        && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
>        && gimple_code (dstmt_vinfo->stmt) != GIMPLE_PHI
>        && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def
> @@ -576,10 +576,11 @@ process_use (gimple *stmt, tree use, loo
>       inductions.  Otherwise we'll needlessly vectorize the IV increment
>       and cause hybrid SLP for SLP inductions.  Unless the PHI is live
>       of course.  */
> -  else if (gimple_code (stmt) == GIMPLE_PHI
> +  else if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI
>            && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_induction_def
>            && ! STMT_VINFO_LIVE_P (stmt_vinfo)
> -          && (PHI_ARG_DEF_FROM_EDGE (stmt, loop_latch_edge (bb->loop_father))
> +          && (PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt,
> +                                     loop_latch_edge (bb->loop_father))
>                == use))
>      {
>        if (dump_enabled_p ())
> @@ -740,7 +741,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
>            /* Pattern statements are not inserted into the code, so
>               FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
>               have to scan the RHS or function arguments instead.  */
> -         if (gassign *assign = dyn_cast <gassign *> (stmt))
> +         if (gassign *assign = dyn_cast <gassign *> (stmt_vinfo->stmt))
>             {
>               enum tree_code rhs_code = gimple_assign_rhs_code (assign);
>               tree op = gimple_assign_rhs1 (assign);
> @@ -748,10 +749,10 @@ vect_mark_stmts_to_be_vectorized (loop_v
>               i = 1;
>               if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op))
>                 {
> -                 if (!process_use (stmt, TREE_OPERAND (op, 0), loop_vinfo,
> -                                   relevant, &worklist, false)
> -                     || !process_use (stmt, TREE_OPERAND (op, 1), loop_vinfo,
> -                                      relevant, &worklist, false))
> +                 if (!process_use (stmt_vinfo, TREE_OPERAND (op, 0),
> +                                   loop_vinfo, relevant, &worklist, false)
> +                     || !process_use (stmt_vinfo, TREE_OPERAND (op, 1),
> +                                      loop_vinfo, relevant, &worklist, false))
>                     return false;
>                   i = 2;
>                 }
> @@ -759,27 +760,27 @@ vect_mark_stmts_to_be_vectorized (loop_v
>                 {
>                   op = gimple_op (assign, i);
>                    if (TREE_CODE (op) == SSA_NAME
> -                     && !process_use (stmt, op, loop_vinfo, relevant,
> +                     && !process_use (stmt_vinfo, op, loop_vinfo, relevant,
>                                        &worklist, false))
>                      return false;
>                   }
>              }
> -         else if (gcall *call = dyn_cast <gcall *> (stmt))
> +         else if (gcall *call = dyn_cast <gcall *> (stmt_vinfo->stmt))
>             {
>               for (i = 0; i < gimple_call_num_args (call); i++)
>                 {
>                   tree arg = gimple_call_arg (call, i);
> -                 if (!process_use (stmt, arg, loop_vinfo, relevant,
> +                 if (!process_use (stmt_vinfo, arg, loop_vinfo, relevant,
>                                     &worklist, false))
>                      return false;
>                 }
>             }
>          }
>        else
> -        FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE)
> +       FOR_EACH_PHI_OR_STMT_USE (use_p, stmt_vinfo->stmt, iter, SSA_OP_USE)
>            {
>              tree op = USE_FROM_PTR (use_p);
> -           if (!process_use (stmt, op, loop_vinfo, relevant,
> +           if (!process_use (stmt_vinfo, op, loop_vinfo, relevant,
>                               &worklist, false))
>                return false;
>            }
> @@ -787,9 +788,9 @@ vect_mark_stmts_to_be_vectorized (loop_v
>        if (STMT_VINFO_GATHER_SCATTER_P (stmt_vinfo))
>         {
>           gather_scatter_info gs_info;
> -         if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info))
> +         if (!vect_check_gather_scatter (stmt_vinfo, loop_vinfo, &gs_info))
>             gcc_unreachable ();
> -         if (!process_use (stmt, gs_info.offset, loop_vinfo, relevant,
> +         if (!process_use (stmt_vinfo, gs_info.offset, loop_vinfo, relevant,
>                             &worklist, true))
>             return false;
>         }
> @@ -1362,8 +1363,8 @@ vect_init_vector_1 (gimple *stmt, gimple
>           basic_block new_bb;
>           edge pe;
>
> -          if (nested_in_vect_loop_p (loop, stmt))
> -            loop = loop->inner;
> +         if (nested_in_vect_loop_p (loop, stmt_vinfo))
> +           loop = loop->inner;
>
>           pe = loop_preheader_edge (loop);
>            new_bb = gsi_insert_on_edge_immediate (pe, new_stmt);
> @@ -1573,7 +1574,7 @@ vect_get_vec_def_for_operand (tree op, g
>         vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
>
>        gcc_assert (vector_type);
> -      return vect_init_vector (stmt, op, vector_type, NULL);
> +      return vect_init_vector (stmt_vinfo, op, vector_type, NULL);
>      }
>    else
>      return vect_get_vec_def_for_operand_1 (def_stmt_info, dt);
> @@ -1740,12 +1741,12 @@ vect_finish_stmt_generation_1 (gimple *s
>        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, vec_stmt, 0);
>      }
>
> -  gimple_set_location (vec_stmt, gimple_location (stmt));
> +  gimple_set_location (vec_stmt, gimple_location (stmt_info->stmt));
>
>    /* While EH edges will generally prevent vectorization, stmt might
>       e.g. be in a must-not-throw region.  Ensure newly created stmts
>       that could throw are part of the same region.  */
> -  int lp_nr = lookup_stmt_eh_lp (stmt);
> +  int lp_nr = lookup_stmt_eh_lp (stmt_info->stmt);
>    if (lp_nr != 0 && stmt_could_throw_p (vec_stmt))
>      add_stmt_to_eh_lp (vec_stmt, lp_nr);
>
> @@ -2269,7 +2270,7 @@ get_group_load_store_type (gimple *stmt,
>
>        if (!STMT_VINFO_STRIDED_P (stmt_info)
>           && (can_overrun_p || !would_overrun_p)
> -         && compare_step_with_zero (stmt) > 0)
> +         && compare_step_with_zero (stmt_info) > 0)
>         {
>           /* First cope with the degenerate case of a single-element
>              vector.  */
> @@ -2309,7 +2310,7 @@ get_group_load_store_type (gimple *stmt,
>        if (*memory_access_type == VMAT_ELEMENTWISE
>           && single_element_p
>           && loop_vinfo
> -         && vect_use_strided_gather_scatters_p (stmt, loop_vinfo,
> +         && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
>                                                  masked_p, gs_info))
>         *memory_access_type = VMAT_GATHER_SCATTER;
>      }
> @@ -2421,7 +2422,7 @@ get_load_store_type (gimple *stmt, tree
>    if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>      {
>        *memory_access_type = VMAT_GATHER_SCATTER;
> -      if (!vect_check_gather_scatter (stmt, loop_vinfo, gs_info))
> +      if (!vect_check_gather_scatter (stmt_info, loop_vinfo, gs_info))
>         gcc_unreachable ();
>        else if (!vect_is_simple_use (gs_info->offset, vinfo,
>                                     &gs_info->offset_dt,
> @@ -2436,15 +2437,15 @@ get_load_store_type (gimple *stmt, tree
>      }
>    else if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
> -      if (!get_group_load_store_type (stmt, vectype, slp, masked_p, vls_type,
> -                                     memory_access_type, gs_info))
> +      if (!get_group_load_store_type (stmt_info, vectype, slp, masked_p,
> +                                     vls_type, memory_access_type, gs_info))
>         return false;
>      }
>    else if (STMT_VINFO_STRIDED_P (stmt_info))
>      {
>        gcc_assert (!slp);
>        if (loop_vinfo
> -         && vect_use_strided_gather_scatters_p (stmt, loop_vinfo,
> +         && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
>                                                  masked_p, gs_info))
>         *memory_access_type = VMAT_GATHER_SCATTER;
>        else
> @@ -2452,10 +2453,10 @@ get_load_store_type (gimple *stmt, tree
>      }
>    else
>      {
> -      int cmp = compare_step_with_zero (stmt);
> +      int cmp = compare_step_with_zero (stmt_info);
>        if (cmp < 0)
>         *memory_access_type = get_negative_load_store_type
> -         (stmt, vectype, vls_type, ncopies);
> +         (stmt_info, vectype, vls_type, ncopies);
>        else if (cmp == 0)
>         {
>           gcc_assert (vls_type == VLS_LOAD);
> @@ -2742,8 +2743,8 @@ vect_build_gather_load_calls (gimple *st
>    else
>      gcc_unreachable ();
>
> -  tree vec_dest = vect_create_destination_var (gimple_get_lhs (stmt),
> -                                              vectype);
> +  tree scalar_dest = gimple_get_lhs (stmt_info->stmt);
> +  tree vec_dest = vect_create_destination_var (scalar_dest, vectype);
>
>    tree ptr = fold_convert (ptrtype, gs_info->base);
>    if (!is_gimple_min_invariant (ptr))
> @@ -2765,8 +2766,8 @@ vect_build_gather_load_calls (gimple *st
>
>    if (!mask)
>      {
> -      src_op = vect_build_zero_merge_argument (stmt, rettype);
> -      mask_op = vect_build_all_ones_mask (stmt, masktype);
> +      src_op = vect_build_zero_merge_argument (stmt_info, rettype);
> +      mask_op = vect_build_all_ones_mask (stmt_info, masktype);
>      }
>
>    for (int j = 0; j < ncopies; ++j)
> @@ -2774,10 +2775,10 @@ vect_build_gather_load_calls (gimple *st
>        tree op, var;
>        if (modifier == WIDEN && (j & 1))
>         op = permute_vec_elements (vec_oprnd0, vec_oprnd0,
> -                                  perm_mask, stmt, gsi);
> +                                  perm_mask, stmt_info, gsi);
>        else if (j == 0)
>         op = vec_oprnd0
> -         = vect_get_vec_def_for_operand (gs_info->offset, stmt);
> +         = vect_get_vec_def_for_operand (gs_info->offset, stmt_info);
>        else
>         op = vec_oprnd0
>           = vect_get_vec_def_for_stmt_copy (gs_info->offset_dt, vec_oprnd0);
> @@ -2789,7 +2790,7 @@ vect_build_gather_load_calls (gimple *st
>           var = vect_get_new_ssa_name (idxtype, vect_simple_var);
>           op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
>           gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> -         vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>           op = var;
>         }
>
> @@ -2797,11 +2798,11 @@ vect_build_gather_load_calls (gimple *st
>         {
>           if (mask_perm_mask && (j & 1))
>             mask_op = permute_vec_elements (mask_op, mask_op,
> -                                           mask_perm_mask, stmt, gsi);
> +                                           mask_perm_mask, stmt_info, gsi);
>           else
>             {
>               if (j == 0)
> -               vec_mask = vect_get_vec_def_for_operand (mask, stmt);
> +               vec_mask = vect_get_vec_def_for_operand (mask, stmt_info);
>               else
>                 vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
>
> @@ -2815,7 +2816,7 @@ vect_build_gather_load_calls (gimple *st
>                   mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);
>                   gassign *new_stmt
>                     = gimple_build_assign (var, VIEW_CONVERT_EXPR, mask_op);
> -                 vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                 vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>                   mask_op = var;
>                 }
>             }
> @@ -2832,17 +2833,19 @@ vect_build_gather_load_calls (gimple *st
>                                 TYPE_VECTOR_SUBPARTS (rettype)));
>           op = vect_get_new_ssa_name (rettype, vect_simple_var);
>           gimple_call_set_lhs (new_call, op);
> -         vect_finish_stmt_generation (stmt, new_call, gsi);
> +         vect_finish_stmt_generation (stmt_info, new_call, gsi);
>           var = make_ssa_name (vec_dest);
>           op = build1 (VIEW_CONVERT_EXPR, vectype, op);
>           gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
> -         new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +         new_stmt_info
> +           = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>         }
>        else
>         {
>           var = make_ssa_name (vec_dest, new_call);
>           gimple_call_set_lhs (new_call, var);
> -         new_stmt_info = vect_finish_stmt_generation (stmt, new_call, gsi);
> +         new_stmt_info
> +           = vect_finish_stmt_generation (stmt_info, new_call, gsi);
>         }
>
>        if (modifier == NARROW)
> @@ -2852,7 +2855,8 @@ vect_build_gather_load_calls (gimple *st
>               prev_res = var;
>               continue;
>             }
> -         var = permute_vec_elements (prev_res, var, perm_mask, stmt, gsi);
> +         var = permute_vec_elements (prev_res, var, perm_mask,
> +                                     stmt_info, gsi);
>           new_stmt_info = loop_vinfo->lookup_def (var);
>         }
>
> @@ -3027,7 +3031,7 @@ vectorizable_bswap (gimple *stmt, gimple
>      {
>        /* Handle uses.  */
>        if (j == 0)
> -        vect_get_vec_defs (op, NULL, stmt, &vec_oprnds, NULL, slp_node);
> +       vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
>        else
>          vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
>
> @@ -3040,15 +3044,16 @@ vectorizable_bswap (gimple *stmt, gimple
>          tree tem = make_ssa_name (char_vectype);
>          new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
>                                                       char_vectype, vop));
> -        vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +        vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>          tree tem2 = make_ssa_name (char_vectype);
>          new_stmt = gimple_build_assign (tem2, VEC_PERM_EXPR,
>                                          tem, tem, bswap_vconst);
> -        vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +        vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>          tem = make_ssa_name (vectype);
>          new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR,
>                                                       vectype, tem2));
> -        new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +        new_stmt_info
> +          = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>           if (slp_node)
>            SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>         }
> @@ -3137,8 +3142,8 @@ vectorizable_call (gimple *gs, gimple_st
>        && ! vec_stmt)
>      return false;
>
> -  /* Is GS a vectorizable call?   */
> -  stmt = dyn_cast <gcall *> (gs);
> +  /* Is STMT_INFO a vectorizable call?   */
> +  stmt = dyn_cast <gcall *> (stmt_info->stmt);
>    if (!stmt)
>      return false;
>
> @@ -3307,7 +3312,7 @@ vectorizable_call (gimple *gs, gimple_st
>                && (gimple_call_builtin_p (stmt, BUILT_IN_BSWAP16)
>                    || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP32)
>                    || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP64)))
> -       return vectorizable_bswap (stmt, gsi, vec_stmt, slp_node,
> +       return vectorizable_bswap (stmt_info, gsi, vec_stmt, slp_node,
>                                    vectype_in, dt, cost_vec);
>        else
>         {
> @@ -3400,7 +3405,7 @@ vectorizable_call (gimple *gs, gimple_st
>                       gimple_call_set_lhs (call, half_res);
>                       gimple_call_set_nothrow (call, true);
>                       new_stmt_info
> -                       = vect_finish_stmt_generation (stmt, call, gsi);
> +                       = vect_finish_stmt_generation (stmt_info, call, gsi);
>                       if ((i & 1) == 0)
>                         {
>                           prev_res = half_res;
> @@ -3411,7 +3416,8 @@ vectorizable_call (gimple *gs, gimple_st
>                         = gimple_build_assign (new_temp, convert_code,
>                                                prev_res, half_res);
>                       new_stmt_info
> -                       = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +                       = vect_finish_stmt_generation (stmt_info, new_stmt,
> +                                                      gsi);
>                     }
>                   else
>                     {
> @@ -3435,7 +3441,7 @@ vectorizable_call (gimple *gs, gimple_st
>                       gimple_call_set_lhs (call, new_temp);
>                       gimple_call_set_nothrow (call, true);
>                       new_stmt_info
> -                       = vect_finish_stmt_generation (stmt, call, gsi);
> +                       = vect_finish_stmt_generation (stmt_info, call, gsi);
>                     }
>                   SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info);
>                 }
> @@ -3453,7 +3459,7 @@ vectorizable_call (gimple *gs, gimple_st
>               op = gimple_call_arg (stmt, i);
>               if (j == 0)
>                 vec_oprnd0
> -                 = vect_get_vec_def_for_operand (op, stmt);
> +                 = vect_get_vec_def_for_operand (op, stmt_info);
>               else
>                 vec_oprnd0
>                   = vect_get_vec_def_for_stmt_copy (dt[i], orig_vargs[i]);
> @@ -3476,11 +3482,11 @@ vectorizable_call (gimple *gs, gimple_st
>               tree new_var
>                 = vect_get_new_ssa_name (vectype_out, vect_simple_var, "cst_");
>               gimple *init_stmt = gimple_build_assign (new_var, cst);
> -             vect_init_vector_1 (stmt, init_stmt, NULL);
> +             vect_init_vector_1 (stmt_info, init_stmt, NULL);
>               new_temp = make_ssa_name (vec_dest);
>               gimple *new_stmt = gimple_build_assign (new_temp, new_var);
>               new_stmt_info
> -               = vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +               = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>             }
>           else if (modifier == NARROW)
>             {
> @@ -3491,7 +3497,8 @@ vectorizable_call (gimple *gs, gimple_st
>               gcall *call = gimple_build_call_internal_vec (ifn, vargs);
>               gimple_call_set_lhs (call, half_res);
>               gimple_call_set_nothrow (call, true);
> -             new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi);
> +             new_stmt_info
> +               = vect_finish_stmt_generation (stmt_info, call, gsi);
>               if ((j & 1) == 0)
>                 {
>                   prev_res = half_res;
> @@ -3501,7 +3508,7 @@ vectorizable_call (gimple *gs, gimple_st
>               gassign *new_stmt = gimple_build_assign (new_temp, convert_code,
>                                                        prev_res, half_res);
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2)
  2018-07-24 10:04 ` [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2) Richard Sandiford
@ 2018-07-25 10:03   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:03 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:04 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This second part handles the less mechanical cases, i.e. those that don't
> just involve swapping a gimple stmt for an existing stmt_vec_info.

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (vect_analyze_loop_operations): Look up the
>         statement before passing it to vect_analyze_stmt.
>         (vect_create_epilog_for_reduction): Use a stmt_vec_info to walk
>         the chain of phi vector definitions.  Track the exit phi via its
>         stmt_vec_info.
>         (vectorizable_reduction): Set cond_stmt_vinfo directly from the
>         STMT_VINFO_REDUC_DEF.
>         * tree-vect-slp.c (vect_get_place_in_interleaving_chain): Use
>         stmt_vec_infos to handle the statement chains.
>         (vect_get_slp_defs): Record the first statement in the node
>         using a stmt_vec_info.
>         * tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Look up
>         statements here and pass their stmt_vec_info down to subroutines.
>         (vect_init_vector_1): Hoist call to vinfo_for_stmt and pass it
>         down to vect_finish_stmt_generation.
>         (vect_init_vector, vect_get_vec_defs, vect_finish_replace_stmt)
>         (vect_finish_stmt_generation): Call vinfo_for_stmt and pass
>         stmt_vec_infos to subroutines.
>         (vect_remove_stores): Use stmt_vec_infos to handle the statement
>         chains.
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:35.376732054 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:38.964700191 +0100
> @@ -1629,8 +1629,9 @@ vect_analyze_loop_operations (loop_vec_i
>          {
>           gimple *stmt = gsi_stmt (si);
>           if (!gimple_clobber_p (stmt)
> -             && !vect_analyze_stmt (stmt, &need_to_vectorize, NULL, NULL,
> -                                    &cost_vec))
> +             && !vect_analyze_stmt (loop_vinfo->lookup_stmt (stmt),
> +                                    &need_to_vectorize,
> +                                    NULL, NULL, &cost_vec))
>             return false;
>          }
>      } /* bbs */
> @@ -4832,11 +4833,11 @@ vect_create_epilog_for_reduction (vec<tr
>        tree first_vect = PHI_RESULT (new_phis[0]);
>        gassign *new_vec_stmt = NULL;
>        vec_dest = vect_create_destination_var (scalar_dest, vectype);
> -      gimple *next_phi = new_phis[0];
> +      stmt_vec_info next_phi_info = loop_vinfo->lookup_stmt (new_phis[0]);
>        for (int k = 1; k < ncopies; ++k)
>         {
> -         next_phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next_phi));
> -         tree second_vect = PHI_RESULT (next_phi);
> +         next_phi_info = STMT_VINFO_RELATED_STMT (next_phi_info);
> +         tree second_vect = PHI_RESULT (next_phi_info->stmt);
>            tree tem = make_ssa_name (vec_dest, new_vec_stmt);
>            new_vec_stmt = gimple_build_assign (tem, code,
>                                               first_vect, second_vect);
> @@ -5573,11 +5574,12 @@ vect_create_epilog_for_reduction (vec<tr
>    else
>      ratio = 1;
>
> +  stmt_vec_info epilog_stmt_info = NULL;
>    for (k = 0; k < group_size; k++)
>      {
>        if (k % ratio == 0)
>          {
> -          epilog_stmt = new_phis[k / ratio];
> +         epilog_stmt_info = loop_vinfo->lookup_stmt (new_phis[k / ratio]);
>           reduction_phi_info = reduction_phis[k / ratio];
>           if (double_reduc)
>             inner_phi = inner_phis[k / ratio];
> @@ -5623,8 +5625,7 @@ vect_create_epilog_for_reduction (vec<tr
>               if (double_reduc)
>                 STMT_VINFO_VEC_STMT (exit_phi_vinfo) = inner_phi;
>               else
> -               STMT_VINFO_VEC_STMT (exit_phi_vinfo)
> -                 = vinfo_for_stmt (epilog_stmt);
> +               STMT_VINFO_VEC_STMT (exit_phi_vinfo) = epilog_stmt_info;
>                if (!double_reduc
>                    || STMT_VINFO_DEF_TYPE (exit_phi_vinfo)
>                        != vect_double_reduction_def)
> @@ -6070,7 +6071,7 @@ vectorizable_reduction (gimple *stmt, gi
>    optab optab;
>    tree new_temp = NULL_TREE;
>    enum vect_def_type dt, cond_reduc_dt = vect_unknown_def_type;
> -  gimple *cond_reduc_def_stmt = NULL;
> +  stmt_vec_info cond_stmt_vinfo = NULL;
>    enum tree_code cond_reduc_op_code = ERROR_MARK;
>    tree scalar_type;
>    bool is_simple_use;
> @@ -6348,7 +6349,7 @@ vectorizable_reduction (gimple *stmt, gi
>               && is_nonwrapping_integer_induction (def_stmt_info, loop))
>             {
>               cond_reduc_dt = dt;
> -             cond_reduc_def_stmt = def_stmt_info;
> +             cond_stmt_vinfo = def_stmt_info;
>             }
>         }
>      }
> @@ -6454,7 +6455,6 @@ vectorizable_reduction (gimple *stmt, gi
>         }
>        else if (cond_reduc_dt == vect_induction_def)
>         {
> -         stmt_vec_info cond_stmt_vinfo = vinfo_for_stmt (cond_reduc_def_stmt);
>           tree base
>             = STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED (cond_stmt_vinfo);
>           tree step = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (cond_stmt_vinfo);
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:35.380732018 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:38.964700191 +0100
> @@ -201,21 +201,23 @@ vect_free_oprnd_info (vec<slp_oprnd_info
>  int
>  vect_get_place_in_interleaving_chain (gimple *stmt, gimple *first_stmt)
>  {
> -  gimple *next_stmt = first_stmt;
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
> +  stmt_vec_info next_stmt_info = first_stmt_info;
>    int result = 0;
>
> -  if (first_stmt != DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  if (first_stmt_info != DR_GROUP_FIRST_ELEMENT (stmt_info))
>      return -1;
>
>    do
>      {
> -      if (next_stmt == stmt)
> +      if (next_stmt_info == stmt_info)
>         return result;
> -      next_stmt = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
> -      if (next_stmt)
> -       result += DR_GROUP_GAP (vinfo_for_stmt (next_stmt));
> +      next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
> +      if (next_stmt_info)
> +       result += DR_GROUP_GAP (next_stmt_info);
>      }
> -  while (next_stmt);
> +  while (next_stmt_info);
>
>    return -1;
>  }
> @@ -3577,7 +3579,6 @@ vect_get_slp_vect_defs (slp_tree slp_nod
>  vect_get_slp_defs (vec<tree> ops, slp_tree slp_node,
>                    vec<vec<tree> > *vec_oprnds)
>  {
> -  gimple *first_stmt;
>    int number_of_vects = 0, i;
>    unsigned int child_index = 0;
>    HOST_WIDE_INT lhs_size_unit, rhs_size_unit;
> @@ -3586,7 +3587,7 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>    tree oprnd;
>    bool vectorized_defs;
>
> -  first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
> +  stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
>    FOR_EACH_VEC_ELT (ops, i, oprnd)
>      {
>        /* For each operand we check if it has vectorized definitions in a child
> @@ -3637,8 +3638,8 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>                   vect_schedule_slp_instance (), fix it by replacing LHS with
>                   RHS, if necessary.  See vect_get_smallest_scalar_type () for
>                   details.  */
> -              vect_get_smallest_scalar_type (first_stmt, &lhs_size_unit,
> -                                             &rhs_size_unit);
> +             vect_get_smallest_scalar_type (first_stmt_info, &lhs_size_unit,
> +                                            &rhs_size_unit);
>                if (rhs_size_unit != lhs_size_unit)
>                  {
>                    number_of_vects *= rhs_size_unit;
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:35.384731983 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:38.968700155 +0100
> @@ -622,7 +622,6 @@ vect_mark_stmts_to_be_vectorized (loop_v
>    unsigned int i;
>    stmt_vec_info stmt_vinfo;
>    basic_block bb;
> -  gimple *phi;
>    bool live_p;
>    enum vect_relevant relevant;
>
> @@ -636,27 +635,27 @@ vect_mark_stmts_to_be_vectorized (loop_v
>        bb = bbs[i];
>        for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si))
>         {
> -         phi = gsi_stmt (si);
> +         stmt_vec_info phi_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_NOTE, vect_location, "init: phi relevant? ");
> -             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
> +             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0);
>             }
>
> -         if (vect_stmt_relevant_p (phi, loop_vinfo, &relevant, &live_p))
> -           vect_mark_relevant (&worklist, phi, relevant, live_p);
> +         if (vect_stmt_relevant_p (phi_info, loop_vinfo, &relevant, &live_p))
> +           vect_mark_relevant (&worklist, phi_info, relevant, live_p);
>         }
>        for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
>         {
> -         stmt = gsi_stmt (si);
> +         stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (gsi_stmt (si));
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_NOTE, vect_location, "init: stmt relevant? ");
> -             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>             }
>
> -         if (vect_stmt_relevant_p (stmt, loop_vinfo, &relevant, &live_p))
> -           vect_mark_relevant (&worklist, stmt, relevant, live_p);
> +         if (vect_stmt_relevant_p (stmt_info, loop_vinfo, &relevant, &live_p))
> +           vect_mark_relevant (&worklist, stmt_info, relevant, live_p);
>         }
>      }
>
> @@ -1350,11 +1349,11 @@ vect_get_load_cost (stmt_vec_info stmt_i
>  static void
>  vect_init_vector_1 (gimple *stmt, gimple *new_stmt, gimple_stmt_iterator *gsi)
>  {
> +  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    if (gsi)
> -    vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +    vect_finish_stmt_generation (stmt_vinfo, new_stmt, gsi);
>    else
>      {
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>        loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>
>        if (loop_vinfo)
> @@ -1404,6 +1403,7 @@ vect_init_vector_1 (gimple *stmt, gimple
>  tree
>  vect_init_vector (gimple *stmt, tree val, tree type, gimple_stmt_iterator *gsi)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    gimple *init_stmt;
>    tree new_temp;
>
> @@ -1427,7 +1427,7 @@ vect_init_vector (gimple *stmt, tree val
>                   new_temp = make_ssa_name (TREE_TYPE (type));
>                   init_stmt = gimple_build_assign (new_temp, COND_EXPR,
>                                                    val, true_val, false_val);
> -                 vect_init_vector_1 (stmt, init_stmt, gsi);
> +                 vect_init_vector_1 (stmt_info, init_stmt, gsi);
>                   val = new_temp;
>                 }
>             }
> @@ -1443,7 +1443,7 @@ vect_init_vector (gimple *stmt, tree val
>                                                               val));
>               else
>                 init_stmt = gimple_build_assign (new_temp, NOP_EXPR, val);
> -             vect_init_vector_1 (stmt, init_stmt, gsi);
> +             vect_init_vector_1 (stmt_info, init_stmt, gsi);
>               val = new_temp;
>             }
>         }
> @@ -1452,7 +1452,7 @@ vect_init_vector (gimple *stmt, tree val
>
>    new_temp = vect_get_new_ssa_name (type, vect_simple_var, "cst_");
>    init_stmt = gimple_build_assign  (new_temp, val);
> -  vect_init_vector_1 (stmt, init_stmt, gsi);
> +  vect_init_vector_1 (stmt_info, init_stmt, gsi);
>    return new_temp;
>  }
>
> @@ -1690,6 +1690,7 @@ vect_get_vec_defs (tree op0, tree op1, g
>                    vec<tree> *vec_oprnds1,
>                    slp_tree slp_node)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    if (slp_node)
>      {
>        int nops = (op1 == NULL_TREE) ? 1 : 2;
> @@ -1711,13 +1712,13 @@ vect_get_vec_defs (tree op0, tree op1, g
>        tree vec_oprnd;
>
>        vec_oprnds0->create (1);
> -      vec_oprnd = vect_get_vec_def_for_operand (op0, stmt);
> +      vec_oprnd = vect_get_vec_def_for_operand (op0, stmt_info);
>        vec_oprnds0->quick_push (vec_oprnd);
>
>        if (op1)
>         {
>           vec_oprnds1->create (1);
> -         vec_oprnd = vect_get_vec_def_for_operand (op1, stmt);
> +         vec_oprnd = vect_get_vec_def_for_operand (op1, stmt_info);
>           vec_oprnds1->quick_push (vec_oprnd);
>         }
>      }
> @@ -1760,12 +1761,13 @@ vect_finish_stmt_generation_1 (gimple *s
>  stmt_vec_info
>  vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
>  {
> -  gcc_assert (gimple_get_lhs (stmt) == gimple_get_lhs (vec_stmt));
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  gcc_assert (gimple_get_lhs (stmt_info->stmt) == gimple_get_lhs (vec_stmt));
>
> -  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> +  gimple_stmt_iterator gsi = gsi_for_stmt (stmt_info->stmt);
>    gsi_replace (&gsi, vec_stmt, false);
>
> -  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
> +  return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
>  }
>
>  /* Add VEC_STMT to the vectorized implementation of STMT and insert it
> @@ -1775,7 +1777,8 @@ vect_finish_replace_stmt (gimple *stmt,
>  vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
>                              gimple_stmt_iterator *gsi)
>  {
> -  gcc_assert (gimple_code (stmt) != GIMPLE_LABEL);
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  gcc_assert (gimple_code (stmt_info->stmt) != GIMPLE_LABEL);
>
>    if (!gsi_end_p (*gsi)
>        && gimple_has_mem_ops (vec_stmt))
> @@ -1804,7 +1807,7 @@ vect_finish_stmt_generation (gimple *stm
>         }
>      }
>    gsi_insert_before (gsi, vec_stmt, GSI_SAME_STMT);
> -  return vect_finish_stmt_generation_1 (stmt, vec_stmt);
> +  return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
>  }
>
>  /* We want to vectorize a call to combined function CFN with function
> @@ -9856,23 +9859,21 @@ vect_transform_stmt (gimple *stmt, gimpl
>  void
>  vect_remove_stores (gimple *first_stmt)
>  {
> -  gimple *next = first_stmt;
> +  stmt_vec_info next_stmt_info = vinfo_for_stmt (first_stmt);
>    gimple_stmt_iterator next_si;
>
> -  while (next)
> +  while (next_stmt_info)
>      {
> -      stmt_vec_info stmt_info = vinfo_for_stmt (next);
> -
> -      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (stmt_info);
> -      if (is_pattern_stmt_p (stmt_info))
> -       next = STMT_VINFO_RELATED_STMT (stmt_info);
> +      stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
> +      if (is_pattern_stmt_p (next_stmt_info))
> +       next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
>        /* Free the attached stmt_vec_info and remove the stmt.  */
> -      next_si = gsi_for_stmt (next);
> -      unlink_stmt_vdef (next);
> +      next_si = gsi_for_stmt (next_stmt_info->stmt);
> +      unlink_stmt_vdef (next_stmt_info->stmt);
>        gsi_remove (&next_si, true);
> -      release_defs (next);
> -      free_stmt_vec_info (next);
> -      next = tmp;
> +      release_defs (next_stmt_info->stmt);
> +      free_stmt_vec_info (next_stmt_info);
> +      next_stmt_info = tmp;
>      }
>  }
>

^ permalink raw reply	[flat|nested] 108+ messages in thread
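
As a minimal illustration of the pattern this patch applies -- look the
statement up in its owning vec_info once, then pass the stmt_vec_info down
the call chain -- here is a self-contained C++ sketch.  The mock_* types
are hypothetical stand-ins invented for illustration, not GCC's real
gimple, stmt_vec_info and loop_vec_info classes, which are far richer;
only the shape of the change is the same.

    #include <cassert>
    #include <unordered_map>

    struct mock_stmt { int uid; };

    struct mock_stmt_info
    {
      mock_stmt *stmt;     /* the underlying statement */
    };

    struct mock_loop_vinfo
    {
      std::unordered_map<mock_stmt *, mock_stmt_info *> infos;

      /* Analogue of loop_vinfo->lookup_stmt: resolve the statement
         relative to the owning vec_info, not to global state.  */
      mock_stmt_info *lookup_stmt (mock_stmt *s)
      {
        auto it = infos.find (s);
        return it == infos.end () ? nullptr : it->second;
      }
    };

    /* Internal routines take the stmt_vec_info directly and read
       ->stmt only where the raw statement is really needed.  */
    static void analyze (mock_stmt_info *stmt_info)
    {
      assert (stmt_info->stmt != nullptr);
    }

    int main ()
    {
      mock_stmt s = { 1 };
      mock_stmt_info si = { &s };
      mock_loop_vinfo vinfo;
      vinfo.infos[&s] = &si;

      if (mock_stmt_info *info = vinfo.lookup_stmt (&s))
        analyze (info);
      return 0;
    }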

* Re: [30/46] Use stmt_vec_infos rather than gimple stmts for worklists
  2018-07-24 10:04 ` [30/46] Use stmt_vec_infos rather than gimple stmts for worklists Richard Sandiford
@ 2018-07-25 10:04   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:04 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:05 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-loop.c (vect_analyze_scalar_cycles_1): Change the type
>         of the worklist from a vector of gimple stmts to a vector of
>         stmt_vec_infos.
>         * tree-vect-stmts.c (vect_mark_relevant, process_use)
>         (vect_mark_stmts_to_be_vectorized): Likewise.

OK

> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:38.964700191 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:42.472669038 +0100
> @@ -474,7 +474,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>  {
>    basic_block bb = loop->header;
>    tree init, step;
> -  auto_vec<gimple *, 64> worklist;
> +  auto_vec<stmt_vec_info, 64> worklist;
>    gphi_iterator gsi;
>    bool double_reduc;
>
> @@ -543,9 +543,9 @@ vect_analyze_scalar_cycles_1 (loop_vec_i
>    /* Second - identify all reductions and nested cycles.  */
>    while (worklist.length () > 0)
>      {
> -      gimple *phi = worklist.pop ();
> +      stmt_vec_info stmt_vinfo = worklist.pop ();
> +      gphi *phi = as_a <gphi *> (stmt_vinfo->stmt);
>        tree def = PHI_RESULT (phi);
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
>
>        if (dump_enabled_p ())
>          {
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:38.968700155 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:42.472669038 +0100
> @@ -194,7 +194,7 @@ vect_clobber_variable (gimple *stmt, gim
>     Mark STMT as "relevant for vectorization" and add it to WORKLIST.  */
>
>  static void
> -vect_mark_relevant (vec<gimple *> *worklist, gimple *stmt,
> +vect_mark_relevant (vec<stmt_vec_info> *worklist, gimple *stmt,
>                     enum vect_relevant relevant, bool live_p)
>  {
>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> @@ -453,7 +453,7 @@ exist_non_indexing_operands_for_use_p (t
>
>  static bool
>  process_use (gimple *stmt, tree use, loop_vec_info loop_vinfo,
> -            enum vect_relevant relevant, vec<gimple *> *worklist,
> +            enum vect_relevant relevant, vec<stmt_vec_info> *worklist,
>              bool force)
>  {
>    stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
> @@ -618,16 +618,14 @@ vect_mark_stmts_to_be_vectorized (loop_v
>    basic_block *bbs = LOOP_VINFO_BBS (loop_vinfo);
>    unsigned int nbbs = loop->num_nodes;
>    gimple_stmt_iterator si;
> -  gimple *stmt;
>    unsigned int i;
> -  stmt_vec_info stmt_vinfo;
>    basic_block bb;
>    bool live_p;
>    enum vect_relevant relevant;
>
>    DUMP_VECT_SCOPE ("vect_mark_stmts_to_be_vectorized");
>
> -  auto_vec<gimple *, 64> worklist;
> +  auto_vec<stmt_vec_info, 64> worklist;
>
>    /* 1. Init worklist.  */
>    for (i = 0; i < nbbs; i++)
> @@ -665,17 +663,17 @@ vect_mark_stmts_to_be_vectorized (loop_v
>        use_operand_p use_p;
>        ssa_op_iter iter;
>
> -      stmt = worklist.pop ();
> +      stmt_vec_info stmt_vinfo = worklist.pop ();
>        if (dump_enabled_p ())
>         {
> -          dump_printf_loc (MSG_NOTE, vect_location, "worklist: examine stmt: ");
> -          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +         dump_printf_loc (MSG_NOTE, vect_location,
> +                          "worklist: examine stmt: ");
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
>         }
>
>        /* Examine the USEs of STMT. For each USE, mark the stmt that defines it
>          (DEF_STMT) as relevant/irrelevant according to the relevance property
>          of STMT.  */
> -      stmt_vinfo = vinfo_for_stmt (stmt);
>        relevant = STMT_VINFO_RELEVANT (stmt_vinfo);
>
>        /* Generally, the relevance property of STMT (in STMT_VINFO_RELEVANT) is

^ permalink raw reply	[flat|nested] 108+ messages in thread
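
A hypothetical sketch of the worklist change above, with std::vector
standing in for GCC's auto_vec and mock types for gimple/stmt_vec_info:
once the worklist carries stmt_vec_infos, each pop already has the
annotated statement and the separate vinfo_for_stmt call disappears.

    #include <cstdio>
    #include <vector>

    struct mock_stmt { bool is_phi; };
    struct mock_stmt_info { mock_stmt *stmt; };

    int main ()
    {
      mock_stmt phi = { true };
      mock_stmt_info phi_info = { &phi };

      /* vec<stmt_vec_info>-style worklist: the element type already
         carries the vectorizer metadata.  */
      std::vector<mock_stmt_info *> worklist;
      worklist.push_back (&phi_info);

      while (!worklist.empty ())
        {
          mock_stmt_info *stmt_vinfo = worklist.back ();
          worklist.pop_back ();
          /* The raw statement is one dereference away, e.g. for a
             checked cast like as_a <gphi *> (stmt_vinfo->stmt).  */
          std::printf ("is phi: %d\n", stmt_vinfo->stmt->is_phi);
        }
      return 0;
    }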

* Re: [31/46] Use stmt_vec_info in function interfaces (part 1)
  2018-07-24 10:05 ` [31/46] Use stmt_vec_info in function interfaces (part 1) Richard Sandiford
@ 2018-07-25 10:05   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:05 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:05 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This first (less mechanical) part handles cases that involve changes in
> the callers or non-trivial changes in the functions themselves.

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-data-refs.c (vect_describe_gather_scatter_call): Take
>         a stmt_vec_info instead of a gcall.
>         (vect_check_gather_scatter): Update call accordingly.
>         * tree-vect-loop-manip.c (iv_phi_p): Take a stmt_vec_info instead
>         of a gphi.
>         (vect_can_advance_ivs_p, vect_update_ivs_after_vectorizer)
>         (slpeel_update_phi_nodes_for_loops): Update calls accordingly.
>         * tree-vect-loop.c (vect_transform_loop_stmt): Take a stmt_vec_info
>         instead of a gimple stmt.
>         (vect_transform_loop): Update calls accordingly.
>         * tree-vect-slp.c (vect_split_slp_store_group): Take and return
>         stmt_vec_infos instead of gimple stmts.
>         (vect_analyze_slp_instance): Update use accordingly.
>         * tree-vect-stmts.c (read_vector_array, write_vector_array)
>         (vect_clobber_variable, vect_stmt_relevant_p, permute_vec_elements)
>         (vect_use_strided_gather_scatters_p, vect_build_all_ones_mask)
>         (vect_build_zero_merge_argument, vect_get_gather_scatter_ops)
>         (vect_gen_widened_results_half, vect_get_loop_based_defs)
>         (vect_create_vectorized_promotion_stmts, can_vectorize_live_stmts):
>         Take a stmt_vec_info instead of a gimple stmt and pass stmt_vec_infos
>         down to subroutines.
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:35.376732054 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:46.108636749 +0100
> @@ -3621,13 +3621,14 @@ vect_gather_scatter_fn_p (bool read_p, b
>    return true;
>  }
>
> -/* CALL is a call to an internal gather load or scatter store function.
> +/* STMT_INFO is a call to an internal gather load or scatter store function.
>     Describe the operation in INFO.  */
>
>  static void
> -vect_describe_gather_scatter_call (gcall *call, gather_scatter_info *info)
> +vect_describe_gather_scatter_call (stmt_vec_info stmt_info,
> +                                  gather_scatter_info *info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (call);
> +  gcall *call = as_a <gcall *> (stmt_info->stmt);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>
> @@ -3672,7 +3673,7 @@ vect_check_gather_scatter (gimple *stmt,
>        ifn = gimple_call_internal_fn (call);
>        if (internal_gather_scatter_fn_p (ifn))
>         {
> -         vect_describe_gather_scatter_call (call, info);
> +         vect_describe_gather_scatter_call (stmt_info, info);
>           return true;
>         }
>        masked_p = (ifn == IFN_MASK_LOAD || ifn == IFN_MASK_STORE);
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:23:35.376732054 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:23:46.112636713 +0100
> @@ -1335,16 +1335,16 @@ find_loop_location (struct loop *loop)
>    return dump_user_location_t ();
>  }
>
> -/* Return true if PHI defines an IV of the loop to be vectorized.  */
> +/* Return true if the phi described by STMT_INFO defines an IV of the
> +   loop to be vectorized.  */
>
>  static bool
> -iv_phi_p (gphi *phi)
> +iv_phi_p (stmt_vec_info stmt_info)
>  {
> +  gphi *phi = as_a <gphi *> (stmt_info->stmt);
>    if (virtual_operand_p (PHI_RESULT (phi)))
>      return false;
>
> -  stmt_vec_info stmt_info = vinfo_for_stmt (phi);
> -  gcc_assert (stmt_info != NULL_STMT_VEC_INFO);
>    if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def
>        || STMT_VINFO_DEF_TYPE (stmt_info) == vect_double_reduction_def)
>      return false;
> @@ -1388,7 +1388,7 @@ vect_can_advance_ivs_p (loop_vec_info lo
>          virtual defs/uses (i.e., memory accesses) are analyzed elsewhere.
>
>          Skip reduction phis.  */
> -      if (!iv_phi_p (phi))
> +      if (!iv_phi_p (phi_info))
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_NOTE, vect_location,
> @@ -1509,7 +1509,7 @@ vect_update_ivs_after_vectorizer (loop_v
>         }
>
>        /* Skip reduction and virtual phis.  */
> -      if (!iv_phi_p (phi))
> +      if (!iv_phi_p (phi_info))
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2088,7 +2088,8 @@ slpeel_update_phi_nodes_for_loops (loop_
>        tree arg = PHI_ARG_DEF_FROM_EDGE (orig_phi, first_latch_e);
>        /* Generate lcssa PHI node for the first loop.  */
>        gphi *vect_phi = (loop == first) ? orig_phi : update_phi;
> -      if (create_lcssa_for_iv_phis || !iv_phi_p (vect_phi))
> +      stmt_vec_info vect_phi_info = loop_vinfo->lookup_stmt (vect_phi);
> +      if (create_lcssa_for_iv_phis || !iv_phi_p (vect_phi_info))
>         {
>           tree new_res = copy_ssa_name (PHI_RESULT (orig_phi));
>           gphi *lcssa_phi = create_phi_node (new_res, between_bb);
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:42.472669038 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:46.112636713 +0100
> @@ -8207,21 +8207,18 @@ scale_profile_for_vect_loop (struct loop
>      scale_bbs_frequencies (&loop->latch, 1, exit_l->probability / prob);
>  }
>
> -/* Vectorize STMT if relevant, inserting any new instructions before GSI.
> -   When vectorizing STMT as a store, set *SEEN_STORE to its stmt_vec_info.
> +/* Vectorize STMT_INFO if relevant, inserting any new instructions before GSI.
> +   When vectorizing STMT_INFO as a store, set *SEEN_STORE to its stmt_vec_info.
>     *SLP_SCHEDULE is a running record of whether we have called
>     vect_schedule_slp.  */
>
>  static void
> -vect_transform_loop_stmt (loop_vec_info loop_vinfo, gimple *stmt,
> +vect_transform_loop_stmt (loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
>                           gimple_stmt_iterator *gsi,
>                           stmt_vec_info *seen_store, bool *slp_scheduled)
>  {
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
> -  stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt);
> -  if (!stmt_info)
> -    return;
>
>    if (dump_enabled_p ())
>      {
> @@ -8476,15 +8473,19 @@ vect_transform_loop (loop_vec_info loop_
>                       gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info);
>                       for (gimple_stmt_iterator subsi = gsi_start (def_seq);
>                            !gsi_end_p (subsi); gsi_next (&subsi))
> -                       vect_transform_loop_stmt (loop_vinfo,
> -                                                 gsi_stmt (subsi), &si,
> -                                                 &seen_store,
> -                                                 &slp_scheduled);
> -                     gimple *pat_stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> -                     vect_transform_loop_stmt (loop_vinfo, pat_stmt, &si,
> +                       {
> +                         stmt_vec_info pat_stmt_info
> +                           = loop_vinfo->lookup_stmt (gsi_stmt (subsi));
> +                         vect_transform_loop_stmt (loop_vinfo, pat_stmt_info,
> +                                                   &si, &seen_store,
> +                                                   &slp_scheduled);
> +                       }
> +                     stmt_vec_info pat_stmt_info
> +                       = STMT_VINFO_RELATED_STMT (stmt_info);
> +                     vect_transform_loop_stmt (loop_vinfo, pat_stmt_info, &si,
>                                                 &seen_store, &slp_scheduled);
>                     }
> -                 vect_transform_loop_stmt (loop_vinfo, stmt, &si,
> +                 vect_transform_loop_stmt (loop_vinfo, stmt_info, &si,
>                                             &seen_store, &slp_scheduled);
>                 }
>               if (seen_store)
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:38.964700191 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:46.112636713 +0100
> @@ -1856,16 +1856,15 @@ vect_find_last_scalar_stmt_in_slp (slp_t
>    return last;
>  }
>
> -/* Splits a group of stores, currently beginning at FIRST_STMT, into two groups:
> -   one (still beginning at FIRST_STMT) of size GROUP1_SIZE (also containing
> -   the first GROUP1_SIZE stmts, since stores are consecutive), the second
> -   containing the remainder.
> +/* Splits a group of stores, currently beginning at FIRST_VINFO, into
> +   two groups: one (still beginning at FIRST_VINFO) of size GROUP1_SIZE
> +   (also containing the first GROUP1_SIZE stmts, since stores are
> +   consecutive), the second containing the remainder.
>     Return the first stmt in the second group.  */
>
> -static gimple *
> -vect_split_slp_store_group (gimple *first_stmt, unsigned group1_size)
> +static stmt_vec_info
> +vect_split_slp_store_group (stmt_vec_info first_vinfo, unsigned group1_size)
>  {
> -  stmt_vec_info first_vinfo = vinfo_for_stmt (first_stmt);
>    gcc_assert (DR_GROUP_FIRST_ELEMENT (first_vinfo) == first_vinfo);
>    gcc_assert (group1_size > 0);
>    int group2_size = DR_GROUP_SIZE (first_vinfo) - group1_size;
> @@ -2174,7 +2173,8 @@ vect_analyze_slp_instance (vec_info *vin
>           gcc_assert ((const_nunits & (const_nunits - 1)) == 0);
>           unsigned group1_size = i & ~(const_nunits - 1);
>
> -         gimple *rest = vect_split_slp_store_group (stmt_info, group1_size);
> +         stmt_vec_info rest = vect_split_slp_store_group (stmt_info,
> +                                                          group1_size);
>           bool res = vect_analyze_slp_instance (vinfo, stmt_info,
>                                                 max_tree_size);
>           /* If the first non-match was in the middle of a vector,
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:42.472669038 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:46.116636678 +0100
> @@ -117,12 +117,12 @@ create_vector_array (tree elem_type, uns
>
>  /* ARRAY is an array of vectors created by create_vector_array.
>     Return an SSA_NAME for the vector in index N.  The reference
> -   is part of the vectorization of STMT and the vector is associated
> +   is part of the vectorization of STMT_INFO and the vector is associated
>     with scalar destination SCALAR_DEST.  */
>
>  static tree
> -read_vector_array (gimple *stmt, gimple_stmt_iterator *gsi, tree scalar_dest,
> -                  tree array, unsigned HOST_WIDE_INT n)
> +read_vector_array (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
> +                  tree scalar_dest, tree array, unsigned HOST_WIDE_INT n)
>  {
>    tree vect_type, vect, vect_name, array_ref;
>    gimple *new_stmt;
> @@ -137,18 +137,18 @@ read_vector_array (gimple *stmt, gimple_
>    new_stmt = gimple_build_assign (vect, array_ref);
>    vect_name = make_ssa_name (vect, new_stmt);
>    gimple_assign_set_lhs (new_stmt, vect_name);
> -  vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>
>    return vect_name;
>  }
>
>  /* ARRAY is an array of vectors created by create_vector_array.
>     Emit code to store SSA_NAME VECT in index N of the array.
> -   The store is part of the vectorization of STMT.  */
> +   The store is part of the vectorization of STMT_INFO.  */
>
>  static void
> -write_vector_array (gimple *stmt, gimple_stmt_iterator *gsi, tree vect,
> -                   tree array, unsigned HOST_WIDE_INT n)
> +write_vector_array (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
> +                   tree vect, tree array, unsigned HOST_WIDE_INT n)
>  {
>    tree array_ref;
>    gimple *new_stmt;
> @@ -158,7 +158,7 @@ write_vector_array (gimple *stmt, gimple
>                       NULL_TREE, NULL_TREE);
>
>    new_stmt = gimple_build_assign (array_ref, vect);
> -  vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>  }
>
>  /* PTR is a pointer to an array of type TYPE.  Return a representation
> @@ -176,15 +176,16 @@ create_array_ref (tree type, tree ptr, t
>    return mem_ref;
>  }
>
> -/* Add a clobber of variable VAR to the vectorization of STMT.
> +/* Add a clobber of variable VAR to the vectorization of STMT_INFO.
>     Emit the clobber before *GSI.  */
>
>  static void
> -vect_clobber_variable (gimple *stmt, gimple_stmt_iterator *gsi, tree var)
> +vect_clobber_variable (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
> +                      tree var)
>  {
>    tree clobber = build_clobber (TREE_TYPE (var));
>    gimple *new_stmt = gimple_build_assign (var, clobber);
> -  vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>  }
>
>  /* Utility functions used by vect_mark_stmts_to_be_vectorized.  */
> @@ -281,8 +282,8 @@ is_simple_and_all_uses_invariant (gimple
>
>  /* Function vect_stmt_relevant_p.
>
> -   Return true if STMT in loop that is represented by LOOP_VINFO is
> -   "relevant for vectorization".
> +   Return true if STMT_INFO, in the loop that is represented by LOOP_VINFO,
> +   is "relevant for vectorization".
>
>     A stmt is considered "relevant for vectorization" if:
>     - it has uses outside the loop.
> @@ -292,7 +293,7 @@ is_simple_and_all_uses_invariant (gimple
>     CHECKME: what other side effects would the vectorizer allow?  */
>
>  static bool
> -vect_stmt_relevant_p (gimple *stmt, loop_vec_info loop_vinfo,
> +vect_stmt_relevant_p (stmt_vec_info stmt_info, loop_vec_info loop_vinfo,
>                       enum vect_relevant *relevant, bool *live_p)
>  {
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
> @@ -305,15 +306,14 @@ vect_stmt_relevant_p (gimple *stmt, loop
>    *live_p = false;
>
>    /* cond stmt other than loop exit cond.  */
> -  if (is_ctrl_stmt (stmt)
> -      && STMT_VINFO_TYPE (vinfo_for_stmt (stmt))
> -         != loop_exit_ctrl_vec_info_type)
> +  if (is_ctrl_stmt (stmt_info->stmt)
> +      && STMT_VINFO_TYPE (stmt_info) != loop_exit_ctrl_vec_info_type)
>      *relevant = vect_used_in_scope;
>
>    /* changing memory.  */
> -  if (gimple_code (stmt) != GIMPLE_PHI)
> -    if (gimple_vdef (stmt)
> -       && !gimple_clobber_p (stmt))
> +  if (gimple_code (stmt_info->stmt) != GIMPLE_PHI)
> +    if (gimple_vdef (stmt_info->stmt)
> +       && !gimple_clobber_p (stmt_info->stmt))
>        {
>         if (dump_enabled_p ())
>           dump_printf_loc (MSG_NOTE, vect_location,
> @@ -322,7 +322,7 @@ vect_stmt_relevant_p (gimple *stmt, loop
>        }
>
>    /* uses outside the loop.  */
> -  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt, op_iter, SSA_OP_DEF)
> +  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt_info->stmt, op_iter, SSA_OP_DEF)
>      {
>        FOR_EACH_IMM_USE_FAST (use_p, imm_iter, DEF_FROM_PTR (def_p))
>         {
> @@ -347,7 +347,7 @@ vect_stmt_relevant_p (gimple *stmt, loop
>      }
>
>    if (*live_p && *relevant == vect_unused_in_scope
> -      && !is_simple_and_all_uses_invariant (stmt, loop_vinfo))
> +      && !is_simple_and_all_uses_invariant (stmt_info, loop_vinfo))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_NOTE, vect_location,
> @@ -1838,7 +1838,7 @@ vectorizable_internal_function (combined
>  }
>
>
> -static tree permute_vec_elements (tree, tree, tree, gimple *,
> +static tree permute_vec_elements (tree, tree, tree, stmt_vec_info,
>                                   gimple_stmt_iterator *);
>
>  /* Check whether a load or store statement in the loop described by
> @@ -2072,19 +2072,19 @@ vect_truncate_gather_scatter_offset (gim
>  }
>
>  /* Return true if we can use gather/scatter internal functions to
> -   vectorize STMT, which is a grouped or strided load or store.
> +   vectorize STMT_INFO, which is a grouped or strided load or store.
>     MASKED_P is true if load or store is conditional.  When returning
>     true, fill in GS_INFO with the information required to perform the
>     operation.  */
>
>  static bool
> -vect_use_strided_gather_scatters_p (gimple *stmt, loop_vec_info loop_vinfo,
> -                                   bool masked_p,
> +vect_use_strided_gather_scatters_p (stmt_vec_info stmt_info,
> +                                   loop_vec_info loop_vinfo, bool masked_p,
>                                     gather_scatter_info *gs_info)
>  {
> -  if (!vect_check_gather_scatter (stmt, loop_vinfo, gs_info)
> +  if (!vect_check_gather_scatter (stmt_info, loop_vinfo, gs_info)
>        || gs_info->decl)
> -    return vect_truncate_gather_scatter_offset (stmt, loop_vinfo,
> +    return vect_truncate_gather_scatter_offset (stmt_info, loop_vinfo,
>                                                 masked_p, gs_info);
>
>    scalar_mode element_mode = SCALAR_TYPE_MODE (gs_info->element_type);
> @@ -2613,12 +2613,12 @@ vect_check_store_rhs (gimple *stmt, tree
>    return true;
>  }
>
> -/* Build an all-ones vector mask of type MASKTYPE while vectorizing STMT.
> +/* Build an all-ones vector mask of type MASKTYPE while vectorizing STMT_INFO.
>     Note that we support masks with floating-point type, in which case the
>     floats are interpreted as a bitmask.  */
>
>  static tree
> -vect_build_all_ones_mask (gimple *stmt, tree masktype)
> +vect_build_all_ones_mask (stmt_vec_info stmt_info, tree masktype)
>  {
>    if (TREE_CODE (masktype) == INTEGER_TYPE)
>      return build_int_cst (masktype, -1);
> @@ -2626,7 +2626,7 @@ vect_build_all_ones_mask (gimple *stmt,
>      {
>        tree mask = build_int_cst (TREE_TYPE (masktype), -1);
>        mask = build_vector_from_val (masktype, mask);
> -      return vect_init_vector (stmt, mask, masktype, NULL);
> +      return vect_init_vector (stmt_info, mask, masktype, NULL);
>      }
>    else if (SCALAR_FLOAT_TYPE_P (TREE_TYPE (masktype)))
>      {
> @@ -2637,16 +2637,16 @@ vect_build_all_ones_mask (gimple *stmt,
>        real_from_target (&r, tmp, TYPE_MODE (TREE_TYPE (masktype)));
>        tree mask = build_real (TREE_TYPE (masktype), r);
>        mask = build_vector_from_val (masktype, mask);
> -      return vect_init_vector (stmt, mask, masktype, NULL);
> +      return vect_init_vector (stmt_info, mask, masktype, NULL);
>      }
>    gcc_unreachable ();
>  }
>
>  /* Build an all-zero merge value of type VECTYPE while vectorizing
> -   STMT as a gather load.  */
> +   STMT_INFO as a gather load.  */
>
>  static tree
> -vect_build_zero_merge_argument (gimple *stmt, tree vectype)
> +vect_build_zero_merge_argument (stmt_vec_info stmt_info, tree vectype)
>  {
>    tree merge;
>    if (TREE_CODE (TREE_TYPE (vectype)) == INTEGER_TYPE)
> @@ -2663,7 +2663,7 @@ vect_build_zero_merge_argument (gimple *
>    else
>      gcc_unreachable ();
>    merge = build_vector_from_val (vectype, merge);
> -  return vect_init_vector (stmt, merge, vectype, NULL);
> +  return vect_init_vector (stmt_info, merge, vectype, NULL);
>  }
>
>  /* Build a gather load call while vectorizing STMT.  Insert new instructions
> @@ -2871,11 +2871,12 @@ vect_build_gather_load_calls (gimple *st
>
>  /* Prepare the base and offset in GS_INFO for vectorization.
>     Set *DATAREF_PTR to the loop-invariant base address and *VEC_OFFSET
> -   to the vectorized offset argument for the first copy of STMT.  STMT
> -   is the statement described by GS_INFO and LOOP is the containing loop.  */
> +   to the vectorized offset argument for the first copy of STMT_INFO.
> +   STMT_INFO is the statement described by GS_INFO and LOOP is the
> +   containing loop.  */
>
>  static void
> -vect_get_gather_scatter_ops (struct loop *loop, gimple *stmt,
> +vect_get_gather_scatter_ops (struct loop *loop, stmt_vec_info stmt_info,
>                              gather_scatter_info *gs_info,
>                              tree *dataref_ptr, tree *vec_offset)
>  {
> @@ -2890,7 +2891,7 @@ vect_get_gather_scatter_ops (struct loop
>      }
>    tree offset_type = TREE_TYPE (gs_info->offset);
>    tree offset_vectype = get_vectype_for_scalar_type (offset_type);
> -  *vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt,
> +  *vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt_info,
>                                               offset_vectype);
>  }
>
> @@ -4403,14 +4404,14 @@ vectorizable_simd_clone_call (gimple *st
>     VEC_OPRND0 and VEC_OPRND1.  The new vector stmt is to be inserted at BSI.
>     In the case that CODE is a CALL_EXPR, this means that a call to DECL
>     needs to be created (DECL is a function-decl of a target-builtin).
> -   STMT is the original scalar stmt that we are vectorizing.  */
> +   STMT_INFO is the original scalar stmt that we are vectorizing.  */
>
>  static gimple *
>  vect_gen_widened_results_half (enum tree_code code,
>                                tree decl,
>                                 tree vec_oprnd0, tree vec_oprnd1, int op_type,
>                                tree vec_dest, gimple_stmt_iterator *gsi,
> -                              gimple *stmt)
> +                              stmt_vec_info stmt_info)
>  {
>    gimple *new_stmt;
>    tree new_temp;
> @@ -4436,22 +4437,23 @@ vect_gen_widened_results_half (enum tree
>        new_temp = make_ssa_name (vec_dest, new_stmt);
>        gimple_assign_set_lhs (new_stmt, new_temp);
>      }
> -  vect_finish_stmt_generation (stmt, new_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>
>    return new_stmt;
>  }
>
>
> -/* Get vectorized definitions for loop-based vectorization.  For the first
> -   operand we call vect_get_vec_def_for_operand() (with OPRND containing
> -   scalar operand), and for the rest we get a copy with
> +/* Get vectorized definitions for loop-based vectorization of STMT_INFO.
> +   For the first operand we call vect_get_vec_def_for_operand (with OPRND
> +   containing scalar operand), and for the rest we get a copy with
>     vect_get_vec_def_for_stmt_copy() using the previous vector definition
>     (stored in OPRND). See vect_get_vec_def_for_stmt_copy() for details.
>     The vectors are collected into VEC_OPRNDS.  */
>
>  static void
> -vect_get_loop_based_defs (tree *oprnd, gimple *stmt, enum vect_def_type dt,
> -                         vec<tree> *vec_oprnds, int multi_step_cvt)
> +vect_get_loop_based_defs (tree *oprnd, stmt_vec_info stmt_info,
> +                         enum vect_def_type dt, vec<tree> *vec_oprnds,
> +                         int multi_step_cvt)
>  {
>    tree vec_oprnd;
>
> @@ -4459,7 +4461,7 @@ vect_get_loop_based_defs (tree *oprnd, g
>    /* All the vector operands except the very first one (that is scalar oprnd)
>       are stmt copies.  */
>    if (TREE_CODE (TREE_TYPE (*oprnd)) != VECTOR_TYPE)
> -    vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt);
> +    vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt_info);
>    else
>      vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, *oprnd);
>
> @@ -4474,7 +4476,8 @@ vect_get_loop_based_defs (tree *oprnd, g
>    /* For conversion in multiple steps, continue to get operands
>       recursively.  */
>    if (multi_step_cvt)
> -    vect_get_loop_based_defs (oprnd, stmt, dt, vec_oprnds,  multi_step_cvt - 1);
> +    vect_get_loop_based_defs (oprnd, stmt_info, dt, vec_oprnds,
> +                             multi_step_cvt - 1);
>  }
>
>
> @@ -4549,13 +4552,14 @@ vect_create_vectorized_demotion_stmts (v
>
>
>  /* Create vectorized promotion statements for vector operands from VEC_OPRNDS0
> -   and VEC_OPRNDS1 (for binary operations).  For multi-step conversions store
> -   the resulting vectors and call the function recursively.  */
> +   and VEC_OPRNDS1, for a binary operation associated with scalar statement
> +   STMT_INFO.  For multi-step conversions store the resulting vectors and
> +   call the function recursively.  */
>
>  static void
>  vect_create_vectorized_promotion_stmts (vec<tree> *vec_oprnds0,
>                                         vec<tree> *vec_oprnds1,
> -                                       gimple *stmt, tree vec_dest,
> +                                       stmt_vec_info stmt_info, tree vec_dest,
>                                         gimple_stmt_iterator *gsi,
>                                         enum tree_code code1,
>                                         enum tree_code code2, tree decl1,
> @@ -4576,9 +4580,11 @@ vect_create_vectorized_promotion_stmts (
>
>        /* Generate the two halves of promotion operation.  */
>        new_stmt1 = vect_gen_widened_results_half (code1, decl1, vop0, vop1,
> -                                                op_type, vec_dest, gsi, stmt);
> +                                                op_type, vec_dest, gsi,
> +                                                stmt_info);
>        new_stmt2 = vect_gen_widened_results_half (code2, decl2, vop0, vop1,
> -                                                op_type, vec_dest, gsi, stmt);
> +                                                op_type, vec_dest, gsi,
> +                                                stmt_info);
>        if (is_gimple_call (new_stmt1))
>         {
>           new_tmp1 = gimple_call_lhs (new_stmt1);
> @@ -7318,19 +7324,19 @@ vect_gen_perm_mask_checked (tree vectype
>  }
>
>  /* Given a vector variable X and Y, that was generated for the scalar
> -   STMT, generate instructions to permute the vector elements of X and Y
> +   STMT_INFO, generate instructions to permute the vector elements of X and Y
>     using permutation mask MASK_VEC, insert them at *GSI and return the
>     permuted vector variable.  */
>
>  static tree
> -permute_vec_elements (tree x, tree y, tree mask_vec, gimple *stmt,
> +permute_vec_elements (tree x, tree y, tree mask_vec, stmt_vec_info stmt_info,
>                       gimple_stmt_iterator *gsi)
>  {
>    tree vectype = TREE_TYPE (x);
>    tree perm_dest, data_ref;
>    gimple *perm_stmt;
>
> -  tree scalar_dest = gimple_get_lhs (stmt);
> +  tree scalar_dest = gimple_get_lhs (stmt_info->stmt);
>    if (TREE_CODE (scalar_dest) == SSA_NAME)
>      perm_dest = vect_create_destination_var (scalar_dest, vectype);
>    else
> @@ -7339,7 +7345,7 @@ permute_vec_elements (tree x, tree y, tr
>
>    /* Generate the permute statement.  */
>    perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, x, y, mask_vec);
> -  vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +  vect_finish_stmt_generation (stmt_info, perm_stmt, gsi);
>
>    return data_ref;
>  }
> @@ -9409,11 +9415,11 @@ vectorizable_comparison (gimple *stmt, g
>
>  /* If SLP_NODE is nonnull, return true if vectorizable_live_operation
>     can handle all live statements in the node.  Otherwise return true
> -   if STMT is not live or if vectorizable_live_operation can handle it.
> +   if STMT_INFO is not live or if vectorizable_live_operation can handle it.
>     GSI and VEC_STMT are as for vectorizable_live_operation.  */
>
>  static bool
> -can_vectorize_live_stmts (gimple *stmt, gimple_stmt_iterator *gsi,
> +can_vectorize_live_stmts (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>                           slp_tree slp_node, stmt_vec_info *vec_stmt,
>                           stmt_vector_for_cost *cost_vec)
>  {
> @@ -9429,9 +9435,9 @@ can_vectorize_live_stmts (gimple *stmt,
>             return false;
>         }
>      }
> -  else if (STMT_VINFO_LIVE_P (vinfo_for_stmt (stmt))
> -          && !vectorizable_live_operation (stmt, gsi, slp_node, -1, vec_stmt,
> -                                           cost_vec))
> +  else if (STMT_VINFO_LIVE_P (stmt_info)
> +          && !vectorizable_live_operation (stmt_info, gsi, slp_node, -1,
> +                                           vec_stmt, cost_vec))
>      return false;
>
>    return true;

^ permalink raw reply	[flat|nested] 108+ messages in thread
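
A small, self-contained sketch of the interface change in this patch,
again using hypothetical mock types rather than GCC's real classes:
instead of each helper re-deriving the metadata from a raw statement,
the stmt_vec_info is threaded through the whole call chain and the raw
statement is reached through it.

    #include <cassert>

    struct mock_stmt { int lhs; };
    struct mock_stmt_info { mock_stmt *stmt; };

    /* Like gimple_get_lhs (stmt_info->stmt) in the diff: helpers that
       need the raw statement reach it through the info.  */
    static int get_lhs (mock_stmt_info *stmt_info)
    {
      return stmt_info->stmt->lhs;
    }

    /* Before: permute_vec_elements (..., gimple *stmt, ...).
       After:  permute_vec_elements (..., stmt_vec_info stmt_info, ...).  */
    static void permute (mock_stmt_info *stmt_info)
    {
      assert (get_lhs (stmt_info) >= 0);
    }

    int main ()
    {
      mock_stmt s = { 42 };
      mock_stmt_info si = { &s };
      permute (&si);
      return 0;
    }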

* Re: [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs
  2018-07-24 10:06 ` [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs Richard Sandiford
@ 2018-07-25 10:06   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:06 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:06 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes vect_record_max_nunits and vect_record_base_alignment
> take a stmt_vec_info instead of a vec_info/gimple pair.

OK

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-data-refs.c (vect_record_base_alignment): Replace vec_info
>         and gimple stmt arguments with a stmt_vec_info.
>         (vect_record_base_alignments): Update calls accordingly.
>         * tree-vect-slp.c (vect_record_max_nunits): Replace vec_info
>         and gimple stmt arguments with a stmt_vec_info.
>         (vect_build_slp_tree_1): Remove vinfo argument and update call
>         to vect_record_max_nunits.
>         (vect_build_slp_tree_2): Update calls to vect_build_slp_tree_1
>         and vect_record_max_nunits.
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:50.000602186 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:53.204573732 +0100
> @@ -794,14 +794,14 @@ vect_slp_analyze_instance_dependence (sl
>    return res;
>  }
>
> -/* Record in VINFO the base alignment guarantee given by DRB.  STMT is
> -   the statement that contains DRB, which is useful for recording in the
> -   dump file.  */
> +/* Record the base alignment guarantee given by DRB, which occurs
> +   in STMT_INFO.  */
>
>  static void
> -vect_record_base_alignment (vec_info *vinfo, gimple *stmt,
> +vect_record_base_alignment (stmt_vec_info stmt_info,
>                             innermost_loop_behavior *drb)
>  {
> +  vec_info *vinfo = stmt_info->vinfo;
>    bool existed;
>    innermost_loop_behavior *&entry
>      = vinfo->base_alignments.get_or_insert (drb->base_address, &existed);
> @@ -820,7 +820,7 @@ vect_record_base_alignment (vec_info *vi
>                            "  misalignment: %d\n", drb->base_misalignment);
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "  based on:     ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>         }
>      }
>  }
> @@ -847,13 +847,13 @@ vect_record_base_alignments (vec_info *v
>           && STMT_VINFO_VECTORIZABLE (stmt_info)
>           && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>         {
> -         vect_record_base_alignment (vinfo, stmt_info, &DR_INNERMOST (dr));
> +         vect_record_base_alignment (stmt_info, &DR_INNERMOST (dr));
>
>           /* If DR is nested in the loop that is being vectorized, we can also
>              record the alignment of the base wrt the outer loop.  */
>           if (loop && nested_in_vect_loop_p (loop, stmt_info))
>             vect_record_base_alignment
> -               (vinfo, stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
> +             (stmt_info, &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info));
>         }
>      }
>  }
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:50.004602150 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:53.204573732 +0100
> @@ -609,14 +609,14 @@ compatible_calls_p (gcall *call1, gcall
>  }
>
>  /* A subroutine of vect_build_slp_tree for checking VECTYPE, which is the
> -   caller's attempt to find the vector type in STMT with the narrowest
> +   caller's attempt to find the vector type in STMT_INFO with the narrowest
>     element type.  Return true if VECTYPE is nonnull and if it is valid
> -   for VINFO.  When returning true, update MAX_NUNITS to reflect the
> -   number of units in VECTYPE.  VINFO, GORUP_SIZE and MAX_NUNITS are
> -   as for vect_build_slp_tree.  */
> +   for STMT_INFO.  When returning true, update MAX_NUNITS to reflect the
> +   number of units in VECTYPE.  GROUP_SIZE and MAX_NUNITS are as for
> +   vect_build_slp_tree.  */
>
>  static bool
> -vect_record_max_nunits (vec_info *vinfo, gimple *stmt, unsigned int group_size,
> +vect_record_max_nunits (stmt_vec_info stmt_info, unsigned int group_size,
>                         tree vectype, poly_uint64 *max_nunits)
>  {
>    if (!vectype)
> @@ -625,7 +625,8 @@ vect_record_max_nunits (vec_info *vinfo,
>         {
>           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                            "Build SLP failed: unsupported data-type in ");
> -         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +         dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                           stmt_info->stmt, 0);
>           dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>         }
>        /* Fatal mismatch.  */
> @@ -636,7 +637,7 @@ vect_record_max_nunits (vec_info *vinfo,
>       before adjusting *max_nunits for basic-block vectorization.  */
>    poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
>    unsigned HOST_WIDE_INT const_nunits;
> -  if (is_a <bb_vec_info> (vinfo)
> +  if (STMT_VINFO_BB_VINFO (stmt_info)
>        && (!nunits.is_constant (&const_nunits)
>           || const_nunits > group_size))
>      {
> @@ -696,7 +697,7 @@ vect_two_operations_perm_ok_p (vec<stmt_
>     to (B1 <= A1 ? X1 : Y1); or be inverted to (A1 < B1) ? Y1 : X1.  */
>
>  static bool
> -vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
> +vect_build_slp_tree_1 (unsigned char *swap,
>                        vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits, bool *matches,
>                        bool *two_operators)
> @@ -763,7 +764,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>        if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
>                                            &nunits_vectype)
>           || (nunits_vectype
> -             && !vect_record_max_nunits (vinfo, stmt_info, group_size,
> +             && !vect_record_max_nunits (stmt_info, group_size,
>                                           nunits_vectype, max_nunits)))
>         {
>           /* Fatal mismatch.  */
> @@ -1207,8 +1208,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>      {
>        tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
>        tree vectype = get_vectype_for_scalar_type (scalar_type);
> -      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
> -                                  max_nunits))
> +      if (!vect_record_max_nunits (stmt_info, group_size, vectype, max_nunits))
>         return NULL;
>
>        vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
> @@ -1241,7 +1241,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>
>    bool two_operators = false;
>    unsigned char *swap = XALLOCAVEC (unsigned char, group_size);
> -  if (!vect_build_slp_tree_1 (vinfo, swap, stmts, group_size,
> +  if (!vect_build_slp_tree_1 (swap, stmts, group_size,
>                               &this_max_nunits, matches, &two_operators))
>      return NULL;
>
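The hunks above all rely on the same fact: a stmt_vec_info records its
owning vec_info, so a separate vinfo argument is redundant.  A minimal
sketch of the idea (the function name is made up and the GCC-internal
types from tree-vectorizer.h are assumed; this is not code from the
patch):

  /* Sketch only: the owning vec_info is reachable from the
     stmt_vec_info itself, via stmt_info->vinfo or accessors such as
     STMT_VINFO_BB_VINFO.  */
  static bool
  example_is_bb_vectorization (stmt_vec_info stmt_info)
  {
    /* Plays the role of the old "is_a <bb_vec_info> (vinfo)" test.  */
    return STMT_VINFO_BB_VINFO (stmt_info) != NULL;
  }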

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [32/46] Use stmt_vec_info in function interfaces (part 2)
  2018-07-24 10:05 ` [32/46] Use stmt_vec_info in function interfaces (part 2) Richard Sandiford
@ 2018-07-25 10:06   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:06 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:06 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This second part handles the mechanical change from a gimple stmt
> argument to a stmt_vec_info argument.  It updates the function
> comments if they referred to the argument by name, but it doesn't
> try to retrofit mentions in other functions' comments.

OK
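
To make the shape of the change concrete, a minimal before/after sketch
(the function name is made up and the GCC-internal types from
tree-vectorizer.h are assumed; this is not code from the patch):

  /* Before: take the raw gimple stmt and look up its vectorizer
     bookkeeping through vinfo_for_stmt.  */
  static bool
  example_vectorizable_op (gimple *stmt)
  {
    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
    return is_gimple_assign (stmt) && STMT_VINFO_RELEVANT_P (stmt_info);
  }

  /* After: take the stmt_vec_info directly and recover the underlying
     gimple stmt only where it is really needed.  */
  static bool
  example_vectorizable_op (stmt_vec_info stmt_info)
  {
    gimple *stmt = stmt_info->stmt;
    return is_gimple_assign (stmt) && STMT_VINFO_RELEVANT_P (stmt_info);
  }

Besides tightening the interfaces, the second form drops the per-call
lookup behind vinfo_for_stmt.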

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (nested_in_vect_loop_p): Move further down
>         file and take a stmt_vec_info instead of a gimple stmt.
>         (supportable_widening_operation, vect_finish_replace_stmt)
>         (vect_finish_stmt_generation, vect_get_store_rhs)
>         (vect_get_vec_def_for_operand_1, vect_get_vec_def_for_operand)
>         (vect_get_vec_defs, vect_init_vector, vect_transform_stmt)
>         (vect_remove_stores, vect_analyze_stmt, vectorizable_condition)
>         (vect_get_smallest_scalar_type, vect_check_gather_scatter)
>         (vect_create_data_ref_ptr, bump_vector_ptr)
>         (vect_permute_store_chain, vect_setup_realignment)
>         (vect_transform_grouped_load, vect_record_grouped_load_vectors)
>         (vect_create_addr_base_for_vector_ref, vectorizable_live_operation)
>         (vectorizable_reduction, vectorizable_induction)
>         (get_initial_def_for_reduction, is_simple_and_all_uses_invariant)
>         (vect_get_place_in_interleaving_chain): Take stmt_vec_infos rather
>         than gimple stmts as arguments.
>         * tree-vect-data-refs.c (vect_get_smallest_scalar_type)
>         (vect_preserves_scalar_order_p, vect_slp_analyze_node_dependences)
>         (can_group_stmts_p, vect_check_gather_scatter)
>         (vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
>         (bump_vector_ptr, vect_permute_store_chain, vect_setup_realignment)
>         (vect_permute_load_chain, vect_shift_permute_load_chain)
>         (vect_transform_grouped_load)
>         (vect_record_grouped_load_vectors): Likewise.
>         * tree-vect-loop.c (vect_fixup_reduc_chain)
>         (get_initial_def_for_reduction, vect_create_epilog_for_reduction)
>         (vectorize_fold_left_reduction, is_nonwrapping_integer_induction)
>         (vectorizable_reduction, vectorizable_induction)
>         (vectorizable_live_operation, vect_loop_kill_debug_uses): Likewise.
>         * tree-vect-patterns.c (type_conversion_p, adjust_bool_stmts)
>         (vect_get_load_store_mask): Likewise.
>         * tree-vect-slp.c (vect_get_place_in_interleaving_chain)
>         (vect_analyze_slp_instance, vect_mask_constant_operand_p): Likewise.
>         * tree-vect-stmts.c (vect_mark_relevant)
>         (is_simple_and_all_uses_invariant)
>         (exist_non_indexing_operands_for_use_p, process_use)
>         (vect_init_vector_1, vect_init_vector, vect_get_vec_def_for_operand_1)
>         (vect_get_vec_def_for_operand, vect_get_vec_defs)
>         (vect_finish_stmt_generation_1, vect_finish_replace_stmt)
>         (vect_finish_stmt_generation, vect_truncate_gather_scatter_offset)
>         (compare_step_with_zero, vect_get_store_rhs, get_group_load_store_type)
>         (get_negative_load_store_type, get_load_store_type)
>         (vect_check_load_store_mask, vect_check_store_rhs)
>         (vect_build_gather_load_calls, vect_get_strided_load_store_ops)
>         (vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call)
>         (vect_create_vectorized_demotion_stmts, vectorizable_conversion)
>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>         (get_group_alias_ptr_type, vectorizable_store, hoist_defs_of_uses)
>         (vectorizable_load, vectorizable_condition, vectorizable_comparison)
>         (vect_analyze_stmt, vect_transform_stmt, vect_remove_stores)
>         (supportable_widening_operation): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:35.384731983 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:50.008602115 +0100
> @@ -627,13 +627,6 @@ loop_vec_info_for_loop (struct loop *loo
>    return (loop_vec_info) loop->aux;
>  }
>
> -static inline bool
> -nested_in_vect_loop_p (struct loop *loop, gimple *stmt)
> -{
> -  return (loop->inner
> -          && (loop->inner == (gimple_bb (stmt))->loop_father));
> -}
> -
>  typedef struct _bb_vec_info : public vec_info
>  {
>    _bb_vec_info (gimple_stmt_iterator, gimple_stmt_iterator, vec_info_shared *);
> @@ -1119,6 +1112,13 @@ set_vinfo_for_stmt (gimple *stmt, stmt_v
>      }
>  }
>
> +static inline bool
> +nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
> +{
> +  return (loop->inner
> +         && (loop->inner == (gimple_bb (stmt_info->stmt))->loop_father));
> +}
> +
>  /* Return the earlier statement between STMT1_INFO and STMT2_INFO.  */
>
>  static inline stmt_vec_info
> @@ -1493,8 +1493,8 @@ extern bool vect_is_simple_use (tree, ve
>  extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
>                                 tree *, stmt_vec_info * = NULL,
>                                 gimple ** = NULL);
> -extern bool supportable_widening_operation (enum tree_code, gimple *, tree,
> -                                           tree, enum tree_code *,
> +extern bool supportable_widening_operation (enum tree_code, stmt_vec_info,
> +                                           tree, tree, enum tree_code *,
>                                             enum tree_code *, int *,
>                                             vec<tree> *);
>  extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
> @@ -1505,26 +1505,26 @@ extern void free_stmt_vec_info (gimple *
>  extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
>                                   enum vect_cost_for_stmt, stmt_vec_info,
>                                   int, enum vect_cost_model_location);
> -extern stmt_vec_info vect_finish_replace_stmt (gimple *, gimple *);
> -extern stmt_vec_info vect_finish_stmt_generation (gimple *, gimple *,
> +extern stmt_vec_info vect_finish_replace_stmt (stmt_vec_info, gimple *);
> +extern stmt_vec_info vect_finish_stmt_generation (stmt_vec_info, gimple *,
>                                                   gimple_stmt_iterator *);
>  extern bool vect_mark_stmts_to_be_vectorized (loop_vec_info);
> -extern tree vect_get_store_rhs (gimple *);
> -extern tree vect_get_vec_def_for_operand_1 (gimple *, enum vect_def_type);
> -extern tree vect_get_vec_def_for_operand (tree, gimple *, tree = NULL);
> -extern void vect_get_vec_defs (tree, tree, gimple *, vec<tree> *,
> +extern tree vect_get_store_rhs (stmt_vec_info);
> +extern tree vect_get_vec_def_for_operand_1 (stmt_vec_info, enum vect_def_type);
> +extern tree vect_get_vec_def_for_operand (tree, stmt_vec_info, tree = NULL);
> +extern void vect_get_vec_defs (tree, tree, stmt_vec_info, vec<tree> *,
>                                vec<tree> *, slp_tree);
>  extern void vect_get_vec_defs_for_stmt_copy (enum vect_def_type *,
>                                              vec<tree> *, vec<tree> *);
> -extern tree vect_init_vector (gimple *, tree, tree,
> +extern tree vect_init_vector (stmt_vec_info, tree, tree,
>                                gimple_stmt_iterator *);
>  extern tree vect_get_vec_def_for_stmt_copy (enum vect_def_type, tree);
> -extern bool vect_transform_stmt (gimple *, gimple_stmt_iterator *,
> +extern bool vect_transform_stmt (stmt_vec_info, gimple_stmt_iterator *,
>                                   bool *, slp_tree, slp_instance);
> -extern void vect_remove_stores (gimple *);
> -extern bool vect_analyze_stmt (gimple *, bool *, slp_tree, slp_instance,
> +extern void vect_remove_stores (stmt_vec_info);
> +extern bool vect_analyze_stmt (stmt_vec_info, bool *, slp_tree, slp_instance,
>                                stmt_vector_for_cost *);
> -extern bool vectorizable_condition (gimple *, gimple_stmt_iterator *,
> +extern bool vectorizable_condition (stmt_vec_info, gimple_stmt_iterator *,
>                                     stmt_vec_info *, tree, int, slp_tree,
>                                     stmt_vector_for_cost *);
>  extern void vect_get_load_cost (stmt_vec_info, int, bool,
> @@ -1546,7 +1546,7 @@ extern tree vect_get_mask_type_for_stmt
>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>  extern enum dr_alignment_support vect_supportable_dr_alignment
>                                             (struct data_reference *, bool);
> -extern tree vect_get_smallest_scalar_type (gimple *, HOST_WIDE_INT *,
> +extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>                                             HOST_WIDE_INT *);
>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
>  extern bool vect_slp_analyze_instance_dependence (slp_instance);
> @@ -1558,36 +1558,36 @@ extern bool vect_analyze_data_ref_access
>  extern bool vect_prune_runtime_alias_test_list (loop_vec_info);
>  extern bool vect_gather_scatter_fn_p (bool, bool, tree, tree, unsigned int,
>                                       signop, int, internal_fn *, tree *);
> -extern bool vect_check_gather_scatter (gimple *, loop_vec_info,
> +extern bool vect_check_gather_scatter (stmt_vec_info, loop_vec_info,
>                                        gather_scatter_info *);
>  extern bool vect_find_stmt_data_reference (loop_p, gimple *,
>                                            vec<data_reference_p> *);
>  extern bool vect_analyze_data_refs (vec_info *, poly_uint64 *);
>  extern void vect_record_base_alignments (vec_info *);
> -extern tree vect_create_data_ref_ptr (gimple *, tree, struct loop *, tree,
> +extern tree vect_create_data_ref_ptr (stmt_vec_info, tree, struct loop *, tree,
>                                       tree *, gimple_stmt_iterator *,
>                                       gimple **, bool, bool *,
>                                       tree = NULL_TREE, tree = NULL_TREE);
> -extern tree bump_vector_ptr (tree, gimple *, gimple_stmt_iterator *, gimple *,
> -                            tree);
> +extern tree bump_vector_ptr (tree, gimple *, gimple_stmt_iterator *,
> +                            stmt_vec_info, tree);
>  extern void vect_copy_ref_info (tree, tree);
>  extern tree vect_create_destination_var (tree, tree);
>  extern bool vect_grouped_store_supported (tree, unsigned HOST_WIDE_INT);
>  extern bool vect_store_lanes_supported (tree, unsigned HOST_WIDE_INT, bool);
>  extern bool vect_grouped_load_supported (tree, bool, unsigned HOST_WIDE_INT);
>  extern bool vect_load_lanes_supported (tree, unsigned HOST_WIDE_INT, bool);
> -extern void vect_permute_store_chain (vec<tree> ,unsigned int, gimple *,
> +extern void vect_permute_store_chain (vec<tree> ,unsigned int, stmt_vec_info,
>                                      gimple_stmt_iterator *, vec<tree> *);
> -extern tree vect_setup_realignment (gimple *, gimple_stmt_iterator *, tree *,
> -                                    enum dr_alignment_support, tree,
> +extern tree vect_setup_realignment (stmt_vec_info, gimple_stmt_iterator *,
> +                                   tree *, enum dr_alignment_support, tree,
>                                      struct loop **);
> -extern void vect_transform_grouped_load (gimple *, vec<tree> , int,
> +extern void vect_transform_grouped_load (stmt_vec_info, vec<tree> , int,
>                                           gimple_stmt_iterator *);
> -extern void vect_record_grouped_load_vectors (gimple *, vec<tree> );
> +extern void vect_record_grouped_load_vectors (stmt_vec_info, vec<tree>);
>  extern tree vect_get_new_vect_var (tree, enum vect_var_kind, const char *);
>  extern tree vect_get_new_ssa_name (tree, enum vect_var_kind,
>                                    const char * = NULL);
> -extern tree vect_create_addr_base_for_vector_ref (gimple *, gimple_seq *,
> +extern tree vect_create_addr_base_for_vector_ref (stmt_vec_info, gimple_seq *,
>                                                   tree, tree = NULL_TREE);
>
>  /* In tree-vect-loop.c.  */
> @@ -1613,16 +1613,16 @@ extern tree vect_get_loop_mask (gimple_s
>  /* Drive for loop transformation stage.  */
>  extern struct loop *vect_transform_loop (loop_vec_info);
>  extern loop_vec_info vect_analyze_loop_form (struct loop *, vec_info_shared *);
> -extern bool vectorizable_live_operation (gimple *, gimple_stmt_iterator *,
> +extern bool vectorizable_live_operation (stmt_vec_info, gimple_stmt_iterator *,
>                                          slp_tree, int, stmt_vec_info *,
>                                          stmt_vector_for_cost *);
> -extern bool vectorizable_reduction (gimple *, gimple_stmt_iterator *,
> +extern bool vectorizable_reduction (stmt_vec_info, gimple_stmt_iterator *,
>                                     stmt_vec_info *, slp_tree, slp_instance,
>                                     stmt_vector_for_cost *);
> -extern bool vectorizable_induction (gimple *, gimple_stmt_iterator *,
> +extern bool vectorizable_induction (stmt_vec_info, gimple_stmt_iterator *,
>                                     stmt_vec_info *, slp_tree,
>                                     stmt_vector_for_cost *);
> -extern tree get_initial_def_for_reduction (gimple *, tree, tree *);
> +extern tree get_initial_def_for_reduction (stmt_vec_info, tree, tree *);
>  extern bool vect_worthwhile_without_simd_p (vec_info *, tree_code);
>  extern int vect_get_known_peeling_cost (loop_vec_info, int, int *,
>                                         stmt_vector_for_cost *,
> @@ -1643,13 +1643,13 @@ extern void vect_detect_hybrid_slp (loop
>  extern void vect_get_slp_defs (vec<tree> , slp_tree, vec<vec<tree> > *);
>  extern bool vect_slp_bb (basic_block);
>  extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
> -extern bool is_simple_and_all_uses_invariant (gimple *, loop_vec_info);
> +extern bool is_simple_and_all_uses_invariant (stmt_vec_info, loop_vec_info);
>  extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
>                                             unsigned int * = NULL,
>                                             tree * = NULL, tree * = NULL);
>  extern void duplicate_and_interleave (gimple_seq *, tree, vec<tree>,
>                                       unsigned int, vec<tree> &);
> -extern int vect_get_place_in_interleaving_chain (gimple *, gimple *);
> +extern int vect_get_place_in_interleaving_chain (stmt_vec_info, stmt_vec_info);
>
>  /* In tree-vect-patterns.c.  */
>  /* Pattern recognition functions.
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:46.108636749 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:50.000602186 +0100
> @@ -99,7 +99,7 @@ vect_lanes_optab_supported_p (const char
>  }
>
>
> -/* Return the smallest scalar part of STMT.
> +/* Return the smallest scalar part of STMT_INFO.
>     This is used to determine the vectype of the stmt.  We generally set the
>     vectype according to the type of the result (lhs).  For stmts whose
>     result-type is different than the type of the arguments (e.g., demotion,
> @@ -117,10 +117,11 @@ vect_lanes_optab_supported_p (const char
>     types.  */
>
>  tree
> -vect_get_smallest_scalar_type (gimple *stmt, HOST_WIDE_INT *lhs_size_unit,
> -                               HOST_WIDE_INT *rhs_size_unit)
> +vect_get_smallest_scalar_type (stmt_vec_info stmt_info,
> +                              HOST_WIDE_INT *lhs_size_unit,
> +                              HOST_WIDE_INT *rhs_size_unit)
>  {
> -  tree scalar_type = gimple_expr_type (stmt);
> +  tree scalar_type = gimple_expr_type (stmt_info->stmt);
>    HOST_WIDE_INT lhs, rhs;
>
>    /* During the analysis phase, this function is called on arbitrary
> @@ -130,7 +131,7 @@ vect_get_smallest_scalar_type (gimple *s
>
>    lhs = rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
>
> -  gassign *assign = dyn_cast <gassign *> (stmt);
> +  gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
>    if (assign
>        && (gimple_assign_cast_p (assign)
>           || gimple_assign_rhs_code (assign) == DOT_PROD_EXPR
> @@ -191,16 +192,14 @@ vect_check_nonzero_value (loop_vec_info
>    LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
>  }
>
> -/* Return true if we know that the order of vectorized STMT_A and
> -   vectorized STMT_B will be the same as the order of STMT_A and STMT_B.
> -   At least one of the statements is a write.  */
> +/* Return true if we know that the order of vectorized STMTINFO_A and
> +   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
> +   STMTINFO_B.  At least one of the statements is a write.  */
>
>  static bool
> -vect_preserves_scalar_order_p (gimple *stmt_a, gimple *stmt_b)
> +vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
> +                              stmt_vec_info stmtinfo_b)
>  {
> -  stmt_vec_info stmtinfo_a = vinfo_for_stmt (stmt_a);
> -  stmt_vec_info stmtinfo_b = vinfo_for_stmt (stmt_b);
> -
>    /* Single statements are always kept in their original order.  */
>    if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
>        && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
> @@ -666,7 +665,7 @@ vect_slp_analyze_data_ref_dependence (st
>  static bool
>  vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
>                                    vec<stmt_vec_info> stores,
> -                                  gimple *last_store)
> +                                  stmt_vec_info last_store_info)
>  {
>    /* This walks over all stmts involved in the SLP load/store done
>       in NODE verifying we can sink them up to the last stmt in the
> @@ -712,7 +711,7 @@ vect_slp_analyze_node_dependences (slp_i
>              been sunk to (and we verify if we can do that as well).  */
>           if (gimple_visited_p (stmt))
>             {
> -             if (stmt_info != last_store)
> +             if (stmt_info != last_store_info)
>                 continue;
>               unsigned i;
>               stmt_vec_info store_info;
> @@ -2843,20 +2842,20 @@ strip_conversion (tree op)
>    return gimple_assign_rhs1 (stmt);
>  }
>
> -/* Return true if vectorizable_* routines can handle statements STMT1
> -   and STMT2 being in a single group.  */
> +/* Return true if vectorizable_* routines can handle statements STMT1_INFO
> +   and STMT2_INFO being in a single group.  */
>
>  static bool
> -can_group_stmts_p (gimple *stmt1, gimple *stmt2)
> +can_group_stmts_p (stmt_vec_info stmt1_info, stmt_vec_info stmt2_info)
>  {
> -  if (gimple_assign_single_p (stmt1))
> -    return gimple_assign_single_p (stmt2);
> +  if (gimple_assign_single_p (stmt1_info->stmt))
> +    return gimple_assign_single_p (stmt2_info->stmt);
>
> -  gcall *call1 = dyn_cast <gcall *> (stmt1);
> +  gcall *call1 = dyn_cast <gcall *> (stmt1_info->stmt);
>    if (call1 && gimple_call_internal_p (call1))
>      {
>        /* Check for two masked loads or two masked stores.  */
> -      gcall *call2 = dyn_cast <gcall *> (stmt2);
> +      gcall *call2 = dyn_cast <gcall *> (stmt2_info->stmt);
>        if (!call2 || !gimple_call_internal_p (call2))
>         return false;
>        internal_fn ifn = gimple_call_internal_fn (call1);
> @@ -3643,17 +3642,16 @@ vect_describe_gather_scatter_call (stmt_
>    info->memory_type = TREE_TYPE (DR_REF (dr));
>  }
>
> -/* Return true if a non-affine read or write in STMT is suitable for a
> +/* Return true if a non-affine read or write in STMT_INFO is suitable for a
>     gather load or scatter store.  Describe the operation in *INFO if so.  */
>
>  bool
> -vect_check_gather_scatter (gimple *stmt, loop_vec_info loop_vinfo,
> +vect_check_gather_scatter (stmt_vec_info stmt_info, loop_vec_info loop_vinfo,
>                            gather_scatter_info *info)
>  {
>    HOST_WIDE_INT scale = 1;
>    poly_int64 pbitpos, pbitsize;
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree offtype = NULL_TREE;
>    tree decl = NULL_TREE, base, off;
> @@ -4473,7 +4471,7 @@ vect_duplicate_ssa_name_ptr_info (tree n
>     that will be accessed for a data reference.
>
>     Input:
> -   STMT: The statement containing the data reference.
> +   STMT_INFO: The statement containing the data reference.
>     NEW_STMT_LIST: Must be initialized to NULL_TREE or a statement list.
>     OFFSET: Optional. If supplied, it is to be added to the initial address.
>     LOOP:    Specify relative to which loop-nest should the address be computed.
> @@ -4502,12 +4500,11 @@ vect_duplicate_ssa_name_ptr_info (tree n
>     FORNOW: We are only handling array accesses with step 1.  */
>
>  tree
> -vect_create_addr_base_for_vector_ref (gimple *stmt,
> +vect_create_addr_base_for_vector_ref (stmt_vec_info stmt_info,
>                                       gimple_seq *new_stmt_list,
>                                       tree offset,
>                                       tree byte_offset)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    const char *base_name;
>    tree addr_base;
> @@ -4588,26 +4585,26 @@ vect_create_addr_base_for_vector_ref (gi
>  /* Function vect_create_data_ref_ptr.
>
>     Create a new pointer-to-AGGR_TYPE variable (ap), that points to the first
> -   location accessed in the loop by STMT, along with the def-use update
> +   location accessed in the loop by STMT_INFO, along with the def-use update
>     chain to appropriately advance the pointer through the loop iterations.
>     Also set aliasing information for the pointer.  This pointer is used by
>     the callers to this function to create a memory reference expression for
>     vector load/store access.
>
>     Input:
> -   1. STMT: a stmt that references memory. Expected to be of the form
> +   1. STMT_INFO: a stmt that references memory. Expected to be of the form
>           GIMPLE_ASSIGN <name, data-ref> or
>          GIMPLE_ASSIGN <data-ref, name>.
>     2. AGGR_TYPE: the type of the reference, which should be either a vector
>          or an array.
>     3. AT_LOOP: the loop where the vector memref is to be created.
>     4. OFFSET (optional): an offset to be added to the initial address accessed
> -        by the data-ref in STMT.
> +       by the data-ref in STMT_INFO.
>     5. BSI: location where the new stmts are to be placed if there is no loop
>     6. ONLY_INIT: indicate if ap is to be updated in the loop, or remain
>          pointing to the initial address.
>     7. BYTE_OFFSET (optional, defaults to NULL): a byte offset to be added
> -       to the initial address accessed by the data-ref in STMT.  This is
> +       to the initial address accessed by the data-ref in STMT_INFO.  This is
>         similar to OFFSET, but OFFSET is counted in elements, while BYTE_OFFSET
>         in bytes.
>     8. IV_STEP (optional, defaults to NULL): the amount that should be added
> @@ -4643,14 +4640,13 @@ vect_create_addr_base_for_vector_ref (gi
>     4. Return the pointer.  */
>
>  tree
> -vect_create_data_ref_ptr (gimple *stmt, tree aggr_type, struct loop *at_loop,
> -                         tree offset, tree *initial_address,
> -                         gimple_stmt_iterator *gsi, gimple **ptr_incr,
> -                         bool only_init, bool *inv_p, tree byte_offset,
> -                         tree iv_step)
> +vect_create_data_ref_ptr (stmt_vec_info stmt_info, tree aggr_type,
> +                         struct loop *at_loop, tree offset,
> +                         tree *initial_address, gimple_stmt_iterator *gsi,
> +                         gimple **ptr_incr, bool only_init, bool *inv_p,
> +                         tree byte_offset, tree iv_step)
>  {
>    const char *base_name;
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>    bool nested_in_vect_loop = false;
> @@ -4905,7 +4901,7 @@ vect_create_data_ref_ptr (gimple *stmt,
>               the loop.  The increment amount across iterations is expected
>               to be vector_size.
>     BSI - location where the new update stmt is to be placed.
> -   STMT - the original scalar memory-access stmt that is being vectorized.
> +   STMT_INFO - the original scalar memory-access stmt that is being vectorized.
>     BUMP - optional. The offset by which to bump the pointer. If not given,
>           the offset is assumed to be vector_size.
>
> @@ -4915,9 +4911,8 @@ vect_create_data_ref_ptr (gimple *stmt,
>
>  tree
>  bump_vector_ptr (tree dataref_ptr, gimple *ptr_incr, gimple_stmt_iterator *gsi,
> -                gimple *stmt, tree bump)
> +                stmt_vec_info stmt_info, tree bump)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    tree update = TYPE_SIZE_UNIT (vectype);
> @@ -5217,11 +5212,10 @@ vect_store_lanes_supported (tree vectype
>  void
>  vect_permute_store_chain (vec<tree> dr_chain,
>                           unsigned int length,
> -                         gimple *stmt,
> +                         stmt_vec_info stmt_info,
>                           gimple_stmt_iterator *gsi,
>                           vec<tree> *result_chain)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vect1, vect2, high, low;
>    gimple *perm_stmt;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> @@ -5368,12 +5362,12 @@ vect_permute_store_chain (vec<tree> dr_c
>     dr_explicit_realign_optimized.
>
>     The code above sets up a new (vector) pointer, pointing to the first
> -   location accessed by STMT, and a "floor-aligned" load using that pointer.
> -   It also generates code to compute the "realignment-token" (if the relevant
> -   target hook was defined), and creates a phi-node at the loop-header bb
> -   whose arguments are the result of the prolog-load (created by this
> -   function) and the result of a load that takes place in the loop (to be
> -   created by the caller to this function).
> +   location accessed by STMT_INFO, and a "floor-aligned" load using that
> +   pointer.  It also generates code to compute the "realignment-token"
> +   (if the relevant target hook was defined), and creates a phi-node at the
> +   loop-header bb whose arguments are the result of the prolog-load (created
> +   by this function) and the result of a load that takes place in the loop
> +   (to be created by the caller to this function).
>
>     For the case of dr_explicit_realign_optimized:
>     The caller to this function uses the phi-result (msq) to create the
> @@ -5392,8 +5386,8 @@ vect_permute_store_chain (vec<tree> dr_c
>        result = realign_load (msq, lsq, realignment_token);
>
>     Input:
> -   STMT - (scalar) load stmt to be vectorized. This load accesses
> -          a memory location that may be unaligned.
> +   STMT_INFO - (scalar) load stmt to be vectorized. This load accesses
> +              a memory location that may be unaligned.
>     BSI - place where new code is to be inserted.
>     ALIGNMENT_SUPPORT_SCHEME - which of the two misalignment handling schemes
>                               is used.
> @@ -5404,13 +5398,12 @@ vect_permute_store_chain (vec<tree> dr_c
>     Return value - the result of the loop-header phi node.  */
>
>  tree
> -vect_setup_realignment (gimple *stmt, gimple_stmt_iterator *gsi,
> +vect_setup_realignment (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>                          tree *realignment_token,
>                         enum dr_alignment_support alignment_support_scheme,
>                         tree init_addr,
>                         struct loop **at_loop)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> @@ -5839,11 +5832,10 @@ vect_load_lanes_supported (tree vectype,
>  static void
>  vect_permute_load_chain (vec<tree> dr_chain,
>                          unsigned int length,
> -                        gimple *stmt,
> +                        stmt_vec_info stmt_info,
>                          gimple_stmt_iterator *gsi,
>                          vec<tree> *result_chain)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree data_ref, first_vect, second_vect;
>    tree perm_mask_even, perm_mask_odd;
>    tree perm3_mask_low, perm3_mask_high;
> @@ -6043,11 +6035,10 @@ vect_permute_load_chain (vec<tree> dr_ch
>  static bool
>  vect_shift_permute_load_chain (vec<tree> dr_chain,
>                                unsigned int length,
> -                              gimple *stmt,
> +                              stmt_vec_info stmt_info,
>                                gimple_stmt_iterator *gsi,
>                                vec<tree> *result_chain)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vect[3], vect_shift[3], data_ref, first_vect, second_vect;
>    tree perm2_mask1, perm2_mask2, perm3_mask;
>    tree select_mask, shift1_mask, shift2_mask, shift3_mask, shift4_mask;
> @@ -6311,10 +6302,9 @@ vect_shift_permute_load_chain (vec<tree>
>  */
>
>  void
> -vect_transform_grouped_load (gimple *stmt, vec<tree> dr_chain, int size,
> -                            gimple_stmt_iterator *gsi)
> +vect_transform_grouped_load (stmt_vec_info stmt_info, vec<tree> dr_chain,
> +                            int size, gimple_stmt_iterator *gsi)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    machine_mode mode;
>    vec<tree> result_chain = vNULL;
>
> @@ -6337,13 +6327,13 @@ vect_transform_grouped_load (gimple *stm
>  }
>
>  /* RESULT_CHAIN contains the output of a group of grouped loads that were
> -   generated as part of the vectorization of STMT.  Assign the statement
> +   generated as part of the vectorization of STMT_INFO.  Assign the statement
>     for each vector to the associated scalar statement.  */
>
>  void
> -vect_record_grouped_load_vectors (gimple *stmt, vec<tree> result_chain)
> +vect_record_grouped_load_vectors (stmt_vec_info stmt_info,
> +                                 vec<tree> result_chain)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>    stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>    unsigned int i, gap_count;
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:46.112636713 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:50.004602150 +0100
> @@ -648,12 +648,12 @@ vect_analyze_scalar_cycles (loop_vec_inf
>      vect_analyze_scalar_cycles_1 (loop_vinfo, loop->inner);
>  }
>
> -/* Transfer group and reduction information from STMT to its pattern stmt.  */
> +/* Transfer group and reduction information from STMT_INFO to its
> +   pattern stmt.  */
>
>  static void
> -vect_fixup_reduc_chain (gimple *stmt)
> +vect_fixup_reduc_chain (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    stmt_vec_info firstp = STMT_VINFO_RELATED_STMT (stmt_info);
>    stmt_vec_info stmtp;
>    gcc_assert (!REDUC_GROUP_FIRST_ELEMENT (firstp)
> @@ -3998,15 +3998,15 @@ vect_model_induction_cost (stmt_vec_info
>  /* Function get_initial_def_for_reduction
>
>     Input:
> -   STMT - a stmt that performs a reduction operation in the loop.
> +   STMT_VINFO - a stmt that performs a reduction operation in the loop.
>     INIT_VAL - the initial value of the reduction variable
>
>     Output:
>     ADJUSTMENT_DEF - a tree that holds a value to be added to the final result
>          of the reduction (used for adjusting the epilog - see below).
> -   Return a vector variable, initialized according to the operation that STMT
> -        performs. This vector will be used as the initial value of the
> -        vector of partial results.
> +   Return a vector variable, initialized according to the operation that
> +       STMT_VINFO performs. This vector will be used as the initial value
> +       of the vector of partial results.
>
>     Option1 (adjust in epilog): Initialize the vector as follows:
>       add/bit or/xor:    [0,0,...,0,0]
> @@ -4027,7 +4027,7 @@ vect_model_induction_cost (stmt_vec_info
>     for (i=0;i<n;i++)
>       s = s + a[i];
>
> -   STMT is 's = s + a[i]', and the reduction variable is 's'.
> +   STMT_VINFO is 's = s + a[i]', and the reduction variable is 's'.
>     For a vector of 4 units, we want to return either [0,0,0,init_val],
>     or [0,0,0,0] and let the caller know that it needs to adjust
>     the result at the end by 'init_val'.
> @@ -4039,10 +4039,9 @@ vect_model_induction_cost (stmt_vec_info
>     A cost model should help decide between these two schemes.  */
>
>  tree
> -get_initial_def_for_reduction (gimple *stmt, tree init_val,
> +get_initial_def_for_reduction (stmt_vec_info stmt_vinfo, tree init_val,
>                                 tree *adjustment_def)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    tree scalar_type = TREE_TYPE (init_val);
> @@ -4321,7 +4320,7 @@ get_initial_defs_for_reduction (slp_tree
>
>     VECT_DEFS is list of vector of partial results, i.e., the lhs's of vector
>       reduction statements.
> -   STMT is the scalar reduction stmt that is being vectorized.
> +   STMT_INFO is the scalar reduction stmt that is being vectorized.
>     NCOPIES is > 1 in case the vectorization factor (VF) is bigger than the
>       number of elements that we can fit in a vectype (nunits).  In this case
>       we have to generate more than one vector stmt - i.e - we need to "unroll"
> @@ -4334,7 +4333,7 @@ get_initial_defs_for_reduction (slp_tree
>       statement that is defined by REDUCTION_PHI.
>     DOUBLE_REDUC is TRUE if double reduction phi nodes should be handled.
>     SLP_NODE is an SLP node containing a group of reduction statements. The
> -     first one in this group is STMT.
> +     first one in this group is STMT_INFO.
>     INDUC_VAL is for INTEGER_INDUC_COND_REDUCTION the value to use for the case
>       when the COND_EXPR is never true in the loop.  For MAX_EXPR, it needs to
>       be smaller than any value of the IV in the loop, for MIN_EXPR larger than
> @@ -4359,8 +4358,8 @@ get_initial_defs_for_reduction (slp_tree
>
>          loop:
>            vec_def = phi <null, null>            # REDUCTION_PHI
> -          VECT_DEF = vector_stmt                # vectorized form of STMT
> -          s_loop = scalar_stmt                  # (scalar) STMT
> +          VECT_DEF = vector_stmt                # vectorized form of STMT_INFO
> +          s_loop = scalar_stmt                  # (scalar) STMT_INFO
>          loop_exit:
>            s_out0 = phi <s_loop>                 # (scalar) EXIT_PHI
>            use <s_out0>
> @@ -4370,8 +4369,8 @@ get_initial_defs_for_reduction (slp_tree
>
>          loop:
>            vec_def = phi <vec_init, VECT_DEF>    # REDUCTION_PHI
> -          VECT_DEF = vector_stmt                # vectorized form of STMT
> -          s_loop = scalar_stmt                  # (scalar) STMT
> +          VECT_DEF = vector_stmt                # vectorized form of STMT_INFO
> +          s_loop = scalar_stmt                  # (scalar) STMT_INFO
>          loop_exit:
>            s_out0 = phi <s_loop>                 # (scalar) EXIT_PHI
>            v_out1 = phi <VECT_DEF>               # NEW_EXIT_PHI
> @@ -4383,7 +4382,8 @@ get_initial_defs_for_reduction (slp_tree
>  */
>
>  static void
> -vect_create_epilog_for_reduction (vec<tree> vect_defs, gimple *stmt,
> +vect_create_epilog_for_reduction (vec<tree> vect_defs,
> +                                 stmt_vec_info stmt_info,
>                                   gimple *reduc_def_stmt,
>                                   int ncopies, internal_fn reduc_fn,
>                                   vec<stmt_vec_info> reduction_phis,
> @@ -4393,7 +4393,6 @@ vect_create_epilog_for_reduction (vec<tr
>                                   tree induc_val, enum tree_code induc_code,
>                                   tree neutral_op)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    stmt_vec_info prev_phi_info;
>    tree vectype;
>    machine_mode mode;
> @@ -5816,9 +5815,9 @@ vect_expand_fold_left (gimple_stmt_itera
>    return lhs;
>  }
>
> -/* Perform an in-order reduction (FOLD_LEFT_REDUCTION).  STMT is the
> +/* Perform an in-order reduction (FOLD_LEFT_REDUCTION).  STMT_INFO is the
>     statement that sets the live-out value.  REDUC_DEF_STMT is the phi
> -   statement.  CODE is the operation performed by STMT and OPS are
> +   statement.  CODE is the operation performed by STMT_INFO and OPS are
>     its scalar operands.  REDUC_INDEX is the index of the operand in
>     OPS that is set by REDUC_DEF_STMT.  REDUC_FN is the function that
>     implements in-order reduction, or IFN_LAST if we should open-code it.
> @@ -5826,14 +5825,14 @@ vect_expand_fold_left (gimple_stmt_itera
>     that should be used to control the operation in a fully-masked loop.  */
>
>  static bool
> -vectorize_fold_left_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
> +vectorize_fold_left_reduction (stmt_vec_info stmt_info,
> +                              gimple_stmt_iterator *gsi,
>                                stmt_vec_info *vec_stmt, slp_tree slp_node,
>                                gimple *reduc_def_stmt,
>                                tree_code code, internal_fn reduc_fn,
>                                tree ops[3], tree vectype_in,
>                                int reduc_index, vec_loop_masks *masks)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    tree vectype_out = STMT_VINFO_VECTYPE (stmt_info);
> @@ -5962,16 +5961,16 @@ vectorize_fold_left_reduction (gimple *s
>
>  /* Function is_nonwrapping_integer_induction.
>
> -   Check if STMT (which is part of loop LOOP) both increments and
> +   Check if STMT_VINFO (which is part of loop LOOP) both increments and
>     does not cause overflow.  */
>
>  static bool
> -is_nonwrapping_integer_induction (gimple *stmt, struct loop *loop)
> +is_nonwrapping_integer_induction (stmt_vec_info stmt_vinfo, struct loop *loop)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
> +  gphi *phi = as_a <gphi *> (stmt_vinfo->stmt);
>    tree base = STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED (stmt_vinfo);
>    tree step = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_vinfo);
> -  tree lhs_type = TREE_TYPE (gimple_phi_result (stmt));
> +  tree lhs_type = TREE_TYPE (gimple_phi_result (phi));
>    widest_int ni, max_loop_value, lhs_max;
>    wi::overflow_type overflow = wi::OVF_NONE;
>
> @@ -6004,17 +6003,18 @@ is_nonwrapping_integer_induction (gimple
>
>  /* Function vectorizable_reduction.
>
> -   Check if STMT performs a reduction operation that can be vectorized.
> -   If VEC_STMT is also passed, vectorize the STMT: create a vectorized
> +   Check if STMT_INFO performs a reduction operation that can be vectorized.
> +   If VEC_STMT is also passed, vectorize STMT_INFO: create a vectorized
>     stmt to replace it, put it in VEC_STMT, and insert it at GSI.
> -   Return FALSE if not a vectorizable STMT, TRUE otherwise.
> +   Return true if STMT_INFO is vectorizable in this way.
>
>     This function also handles reduction idioms (patterns) that have been
> -   recognized in advance during vect_pattern_recog.  In this case, STMT may be
> -   of this form:
> +   recognized in advance during vect_pattern_recog.  In this case, STMT_INFO
> +   may be of this form:
>       X = pattern_expr (arg0, arg1, ..., X)
> -   and it's STMT_VINFO_RELATED_STMT points to the last stmt in the original
> -   sequence that had been detected and replaced by the pattern-stmt (STMT).
> +   and its STMT_VINFO_RELATED_STMT points to the last stmt in the original
> +   sequence that had been detected and replaced by the pattern-stmt
> +   (STMT_INFO).
>
>     This function also handles reduction of condition expressions, for example:
>       for (int i = 0; i < N; i++)
> @@ -6026,9 +6026,9 @@ is_nonwrapping_integer_induction (gimple
>     index into the vector of results.
>
>     In some cases of reduction patterns, the type of the reduction variable X is
> -   different than the type of the other arguments of STMT.
> -   In such cases, the vectype that is used when transforming STMT into a vector
> -   stmt is different than the vectype that is used to determine the
> +   different than the type of the other arguments of STMT_INFO.
> +   In such cases, the vectype that is used when transforming STMT_INFO into
> +   a vector stmt is different than the vectype that is used to determine the
>     vectorization factor, because it consists of a different number of elements
>     than the actual number of elements that are being operated upon in parallel.
>
> @@ -6052,14 +6052,13 @@ is_nonwrapping_integer_induction (gimple
>     does *NOT* necessarily hold for reduction patterns.  */
>
>  bool
> -vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
> +vectorizable_reduction (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>                         stmt_vec_info *vec_stmt, slp_tree slp_node,
>                         slp_instance slp_node_instance,
>                         stmt_vector_for_cost *cost_vec)
>  {
>    tree vec_dest;
>    tree scalar_dest;
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    tree vectype_out = STMT_VINFO_VECTYPE (stmt_info);
>    tree vectype_in = NULL_TREE;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> @@ -6247,7 +6246,7 @@ vectorizable_reduction (gimple *stmt, gi
>          inside the loop body. The last operand is the reduction variable,
>          which is defined by the loop-header-phi.  */
>
> -  gcc_assert (is_gimple_assign (stmt));
> +  gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>
>    /* Flatten RHS.  */
>    switch (get_gimple_rhs_class (gimple_assign_rhs_code (stmt)))
> @@ -7240,18 +7239,17 @@ vect_worthwhile_without_simd_p (vec_info
>
>  /* Function vectorizable_induction
>
> -   Check if PHI performs an induction computation that can be vectorized.
> +   Check if STMT_INFO performs an induction computation that can be vectorized.
>     If VEC_STMT is also passed, vectorize the induction PHI: create a vectorized
>     phi to replace it, put it in VEC_STMT, and add it to the same basic block.
> -   Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
> +   Return true if STMT_INFO is vectorizable in this way.  */
>
>  bool
> -vectorizable_induction (gimple *phi,
> +vectorizable_induction (stmt_vec_info stmt_info,
>                         gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
>                         stmt_vec_info *vec_stmt, slp_tree slp_node,
>                         stmt_vector_for_cost *cost_vec)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (phi);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    unsigned ncopies;
> @@ -7276,9 +7274,9 @@ vectorizable_induction (gimple *phi,
>    edge latch_e;
>    tree loop_arg;
>    gimple_stmt_iterator si;
> -  basic_block bb = gimple_bb (phi);
>
> -  if (gimple_code (phi) != GIMPLE_PHI)
> +  gphi *phi = dyn_cast <gphi *> (stmt_info->stmt);
> +  if (!phi)
>      return false;
>
>    if (!STMT_VINFO_RELEVANT_P (stmt_info))
> @@ -7426,6 +7424,7 @@ vectorizable_induction (gimple *phi,
>      }
>
>    /* Find the first insertion point in the BB.  */
> +  basic_block bb = gimple_bb (phi);
>    si = gsi_after_labels (bb);
>
>    /* For SLP induction we have to generate several IVs as for example
> @@ -7791,17 +7790,16 @@ vectorizable_induction (gimple *phi,
>
>  /* Function vectorizable_live_operation.
>
> -   STMT computes a value that is used outside the loop.  Check if
> +   STMT_INFO computes a value that is used outside the loop.  Check if
>     it can be supported.  */
>
>  bool
> -vectorizable_live_operation (gimple *stmt,
> +vectorizable_live_operation (stmt_vec_info stmt_info,
>                              gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
>                              slp_tree slp_node, int slp_index,
>                              stmt_vec_info *vec_stmt,
>                              stmt_vector_for_cost *)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    imm_use_iterator imm_iter;
> @@ -7908,8 +7906,9 @@ vectorizable_live_operation (gimple *stm
>      }
>
>    /* If stmt has a related stmt, then use that for getting the lhs.  */
> -  if (is_pattern_stmt_p (stmt_info))
> -    stmt = STMT_VINFO_RELATED_STMT (stmt_info);
> +  gimple *stmt = (is_pattern_stmt_p (stmt_info)
> +                 ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
> +                 : stmt_info->stmt);
>
>    lhs = (is_a <gphi *> (stmt)) ? gimple_phi_result (stmt)
>         : gimple_get_lhs (stmt);
> @@ -8010,17 +8009,17 @@ vectorizable_live_operation (gimple *stm
>    return true;
>  }
>
> -/* Kill any debug uses outside LOOP of SSA names defined in STMT.  */
> +/* Kill any debug uses outside LOOP of SSA names defined in STMT_INFO.  */
>
>  static void
> -vect_loop_kill_debug_uses (struct loop *loop, gimple *stmt)
> +vect_loop_kill_debug_uses (struct loop *loop, stmt_vec_info stmt_info)
>  {
>    ssa_op_iter op_iter;
>    imm_use_iterator imm_iter;
>    def_operand_p def_p;
>    gimple *ustmt;
>
> -  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt, op_iter, SSA_OP_DEF)
> +  FOR_EACH_PHI_OR_STMT_DEF (def_p, stmt_info->stmt, op_iter, SSA_OP_DEF)
>      {
>        FOR_EACH_IMM_USE_STMT (ustmt, imm_iter, DEF_FROM_PTR (def_p))
>         {
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:35.380732018 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:23:50.004602150 +0100
> @@ -236,22 +236,20 @@ vect_get_internal_def (vec_info *vinfo,
>    return NULL;
>  }
>
> -/* Check whether NAME, an ssa-name used in USE_STMT,
> +/* Check whether NAME, an ssa-name used in STMT_VINFO,
>     is a result of a type promotion, such that:
>       DEF_STMT: NAME = NOP (name0)
>     If CHECK_SIGN is TRUE, check that either both types are signed or both are
>     unsigned.  */
>
>  static bool
> -type_conversion_p (tree name, gimple *use_stmt, bool check_sign,
> +type_conversion_p (tree name, stmt_vec_info stmt_vinfo, bool check_sign,
>                    tree *orig_type, gimple **def_stmt, bool *promotion)
>  {
> -  stmt_vec_info stmt_vinfo;
>    tree type = TREE_TYPE (name);
>    tree oprnd0;
>    enum vect_def_type dt;
>
> -  stmt_vinfo = vinfo_for_stmt (use_stmt);
>    stmt_vec_info def_stmt_info;
>    if (!vect_is_simple_use (name, stmt_vinfo->vinfo, &dt, &def_stmt_info,
>                            def_stmt))
> @@ -3498,15 +3496,13 @@ sort_after_uid (const void *p1, const vo
>  }
>
>  /* Create pattern stmts for all stmts participating in the bool pattern
> -   specified by BOOL_STMT_SET and its root STMT with the desired type
> +   specified by BOOL_STMT_SET and its root STMT_INFO with the desired type
>     OUT_TYPE.  Return the def of the pattern root.  */
>
>  static tree
>  adjust_bool_stmts (hash_set <gimple *> &bool_stmt_set,
> -                  tree out_type, gimple *stmt)
> +                  tree out_type, stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -
>    /* Gather original stmts in the bool pattern in their order of appearance
>       in the IL.  */
>    auto_vec<gimple *> bool_stmts (bool_stmt_set.elements ());
> @@ -4126,19 +4122,19 @@ vect_recog_mask_conversion_pattern (stmt
>    return pattern_stmt;
>  }
>
> -/* STMT is a load or store.  If the load or store is conditional, return
> +/* STMT_INFO is a load or store.  If the load or store is conditional, return
>     the boolean condition under which it occurs, otherwise return null.  */
>
>  static tree
> -vect_get_load_store_mask (gimple *stmt)
> +vect_get_load_store_mask (stmt_vec_info stmt_info)
>  {
> -  if (gassign *def_assign = dyn_cast <gassign *> (stmt))
> +  if (gassign *def_assign = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        gcc_assert (gimple_assign_single_p (def_assign));
>        return NULL_TREE;
>      }
>
> -  if (gcall *def_call = dyn_cast <gcall *> (stmt))
> +  if (gcall *def_call = dyn_cast <gcall *> (stmt_info->stmt))
>      {
>        internal_fn ifn = gimple_call_internal_fn (def_call);
>        int mask_index = internal_fn_mask_index (ifn);
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:46.112636713 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:50.004602150 +0100
> @@ -195,14 +195,14 @@ vect_free_oprnd_info (vec<slp_oprnd_info
>  }
>
>
> -/* Find the place of the data-ref in STMT in the interleaving chain that starts
> -   from FIRST_STMT.  Return -1 if the data-ref is not a part of the chain.  */
> +/* Find the place of the data-ref in STMT_INFO in the interleaving chain
> +   that starts from FIRST_STMT_INFO.  Return -1 if the data-ref is not a part
> +   of the chain.  */
>
>  int
> -vect_get_place_in_interleaving_chain (gimple *stmt, gimple *first_stmt)
> +vect_get_place_in_interleaving_chain (stmt_vec_info stmt_info,
> +                                     stmt_vec_info first_stmt_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -  stmt_vec_info first_stmt_info = vinfo_for_stmt (first_stmt);
>    stmt_vec_info next_stmt_info = first_stmt_info;
>    int result = 0;
>
> @@ -1918,9 +1918,8 @@ calculate_unrolling_factor (poly_uint64
>
>  static bool
>  vect_analyze_slp_instance (vec_info *vinfo,
> -                          gimple *stmt, unsigned max_tree_size)
> +                          stmt_vec_info stmt_info, unsigned max_tree_size)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    slp_instance new_instance;
>    slp_tree node;
>    unsigned int group_size;
> @@ -3118,13 +3117,12 @@ vect_slp_bb (basic_block bb)
>
>
>  /* Return 1 if vector type of boolean constant which is OPNUM
> -   operand in statement STMT is a boolean vector.  */
> +   operand in statement STMT_VINFO is a boolean vector.  */
>
>  static bool
> -vect_mask_constant_operand_p (gimple *stmt, int opnum)
> +vect_mask_constant_operand_p (stmt_vec_info stmt_vinfo, int opnum)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
> -  enum tree_code code = gimple_expr_code (stmt);
> +  enum tree_code code = gimple_expr_code (stmt_vinfo->stmt);
>    tree op, vectype;
>    enum vect_def_type dt;
>
> @@ -3132,6 +3130,7 @@ vect_mask_constant_operand_p (gimple *st
>       on the other comparison operand.  */
>    if (TREE_CODE_CLASS (code) == tcc_comparison)
>      {
> +      gassign *stmt = as_a <gassign *> (stmt_vinfo->stmt);
>        if (opnum)
>         op = gimple_assign_rhs1 (stmt);
>        else
> @@ -3145,6 +3144,7 @@ vect_mask_constant_operand_p (gimple *st
>
>    if (code == COND_EXPR)
>      {
> +      gassign *stmt = as_a <gassign *> (stmt_vinfo->stmt);
>        tree cond = gimple_assign_rhs1 (stmt);
>
>        if (TREE_CODE (cond) == SSA_NAME)
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:46.116636678 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:50.008602115 +0100
> @@ -192,13 +192,12 @@ vect_clobber_variable (stmt_vec_info stm
>
>  /* Function vect_mark_relevant.
>
> -   Mark STMT as "relevant for vectorization" and add it to WORKLIST.  */
> +   Mark STMT_INFO as "relevant for vectorization" and add it to WORKLIST.  */
>
>  static void
> -vect_mark_relevant (vec<stmt_vec_info> *worklist, gimple *stmt,
> +vect_mark_relevant (vec<stmt_vec_info> *worklist, stmt_vec_info stmt_info,
>                     enum vect_relevant relevant, bool live_p)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    enum vect_relevant save_relevant = STMT_VINFO_RELEVANT (stmt_info);
>    bool save_live_p = STMT_VINFO_LIVE_P (stmt_info);
>
> @@ -229,7 +228,6 @@ vect_mark_relevant (vec<stmt_vec_info> *
>        gcc_assert (STMT_VINFO_RELATED_STMT (stmt_info) == old_stmt_info);
>        save_relevant = STMT_VINFO_RELEVANT (stmt_info);
>        save_live_p = STMT_VINFO_LIVE_P (stmt_info);
> -      stmt = stmt_info->stmt;
>      }
>
>    STMT_VINFO_LIVE_P (stmt_info) |= live_p;
> @@ -251,15 +249,17 @@ vect_mark_relevant (vec<stmt_vec_info> *
>
>  /* Function is_simple_and_all_uses_invariant
>
> -   Return true if STMT is simple and all uses of it are invariant.  */
> +   Return true if STMT_INFO is simple and all uses of it are invariant.  */
>
>  bool
> -is_simple_and_all_uses_invariant (gimple *stmt, loop_vec_info loop_vinfo)
> +is_simple_and_all_uses_invariant (stmt_vec_info stmt_info,
> +                                 loop_vec_info loop_vinfo)
>  {
>    tree op;
>    ssa_op_iter iter;
>
> -  if (!is_gimple_assign (stmt))
> +  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
> +  if (!stmt)
>      return false;
>
>    FOR_EACH_SSA_TREE_OPERAND (op, stmt, iter, SSA_OP_USE)
> @@ -361,14 +361,13 @@ vect_stmt_relevant_p (stmt_vec_info stmt
>
>  /* Function exist_non_indexing_operands_for_use_p
>
> -   USE is one of the uses attached to STMT.  Check if USE is
> -   used in STMT for anything other than indexing an array.  */
> +   USE is one of the uses attached to STMT_INFO.  Check if USE is
> +   used in STMT_INFO for anything other than indexing an array.  */
>
>  static bool
> -exist_non_indexing_operands_for_use_p (tree use, gimple *stmt)
> +exist_non_indexing_operands_for_use_p (tree use, stmt_vec_info stmt_info)
>  {
>    tree operand;
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>
>    /* USE corresponds to some operand in STMT.  If there is no data
>       reference in STMT, then any operand that corresponds to USE
> @@ -428,7 +427,7 @@ exist_non_indexing_operands_for_use_p (t
>     Function process_use.
>
>     Inputs:
> -   - a USE in STMT in a loop represented by LOOP_VINFO
> +   - a USE in STMT_VINFO in a loop represented by LOOP_VINFO
>     - RELEVANT - enum value to be set in the STMT_VINFO of the stmt
>       that defined USE.  This is done by calling mark_relevant and passing it
>       the WORKLIST (to add DEF_STMT to the WORKLIST in case it is relevant).
> @@ -438,25 +437,24 @@ exist_non_indexing_operands_for_use_p (t
>     Outputs:
>     Generally, LIVE_P and RELEVANT are used to define the liveness and
>     relevance info of the DEF_STMT of this USE:
> -       STMT_VINFO_LIVE_P (DEF_STMT_info) <-- live_p
> -       STMT_VINFO_RELEVANT (DEF_STMT_info) <-- relevant
> +       STMT_VINFO_LIVE_P (DEF_stmt_vinfo) <-- live_p
> +       STMT_VINFO_RELEVANT (DEF_stmt_vinfo) <-- relevant
>     Exceptions:
>     - case 1: If USE is used only for address computations (e.g. array indexing),
>     which does not need to be directly vectorized, then the liveness/relevance
>     of the respective DEF_STMT is left unchanged.
> -   - case 2: If STMT is a reduction phi and DEF_STMT is a reduction stmt, we
> -   skip DEF_STMT cause it had already been processed.
> -   - case 3: If DEF_STMT and STMT are in different nests, then  "relevant" will
> -   be modified accordingly.
> +   - case 2: If STMT_VINFO is a reduction phi and DEF_STMT is a reduction stmt,
> +   we skip DEF_STMT cause it had already been processed.
> +   - case 3: If DEF_STMT and STMT_VINFO are in different nests, then
> +   "relevant" will be modified accordingly.
>
>     Return true if everything is as expected. Return false otherwise.  */
>
>  static bool
> -process_use (gimple *stmt, tree use, loop_vec_info loop_vinfo,
> +process_use (stmt_vec_info stmt_vinfo, tree use, loop_vec_info loop_vinfo,
>              enum vect_relevant relevant, vec<stmt_vec_info> *worklist,
>              bool force)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    stmt_vec_info dstmt_vinfo;
>    basic_block bb, def_bb;
>    enum vect_def_type dt;
> @@ -1342,12 +1340,12 @@ vect_get_load_cost (stmt_vec_info stmt_i
>  }
>
>  /* Insert the new stmt NEW_STMT at *GSI or at the appropriate place in
> -   the loop preheader for the vectorized stmt STMT.  */
> +   the loop preheader for the vectorized stmt STMT_VINFO.  */
>
>  static void
> -vect_init_vector_1 (gimple *stmt, gimple *new_stmt, gimple_stmt_iterator *gsi)
> +vect_init_vector_1 (stmt_vec_info stmt_vinfo, gimple *new_stmt,
> +                   gimple_stmt_iterator *gsi)
>  {
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    if (gsi)
>      vect_finish_stmt_generation (stmt_vinfo, new_stmt, gsi);
>    else
> @@ -1396,12 +1394,12 @@ vect_init_vector_1 (gimple *stmt, gimple
>     Place the initialization at BSI if it is not NULL.  Otherwise, place the
>     initialization at the loop preheader.
>     Return the DEF of INIT_STMT.
> -   It will be used in the vectorization of STMT.  */
> +   It will be used in the vectorization of STMT_INFO.  */
>
>  tree
> -vect_init_vector (gimple *stmt, tree val, tree type, gimple_stmt_iterator *gsi)
> +vect_init_vector (stmt_vec_info stmt_info, tree val, tree type,
> +                 gimple_stmt_iterator *gsi)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    gimple *init_stmt;
>    tree new_temp;
>
> @@ -1456,15 +1454,15 @@ vect_init_vector (gimple *stmt, tree val
>
>  /* Function vect_get_vec_def_for_operand_1.
>
> -   For a defining stmt DEF_STMT of a scalar stmt, return a vector def with type
> -   DT that will be used in the vectorized stmt.  */
> +   For a defining stmt DEF_STMT_INFO of a scalar stmt, return a vector def
> +   with type DT that will be used in the vectorized stmt.  */
>
>  tree
> -vect_get_vec_def_for_operand_1 (gimple *def_stmt, enum vect_def_type dt)
> +vect_get_vec_def_for_operand_1 (stmt_vec_info def_stmt_info,
> +                               enum vect_def_type dt)
>  {
>    tree vec_oprnd;
>    stmt_vec_info vec_stmt_info;
> -  stmt_vec_info def_stmt_info = NULL;
>
>    switch (dt)
>      {
> @@ -1478,8 +1476,6 @@ vect_get_vec_def_for_operand_1 (gimple *
>      case vect_internal_def:
>        {
>          /* Get the def from the vectorized stmt.  */
> -        def_stmt_info = vinfo_for_stmt (def_stmt);
> -
>         vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
>         /* Get vectorized pattern statement.  */
>         if (!vec_stmt_info
> @@ -1501,10 +1497,9 @@ vect_get_vec_def_for_operand_1 (gimple *
>      case vect_nested_cycle:
>      case vect_induction_def:
>        {
> -       gcc_assert (gimple_code (def_stmt) == GIMPLE_PHI);
> +       gcc_assert (gimple_code (def_stmt_info->stmt) == GIMPLE_PHI);
>
>         /* Get the def from the vectorized stmt.  */
> -       def_stmt_info = vinfo_for_stmt (def_stmt);
>         vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
>         if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
>           vec_oprnd = PHI_RESULT (phi);
> @@ -1521,8 +1516,8 @@ vect_get_vec_def_for_operand_1 (gimple *
>
>  /* Function vect_get_vec_def_for_operand.
>
> -   OP is an operand in STMT.  This function returns a (vector) def that will be
> -   used in the vectorized stmt for STMT.
> +   OP is an operand in STMT_VINFO.  This function returns a (vector) def
> +   that will be used in the vectorized stmt for STMT_VINFO.
>
>     In the case that OP is an SSA_NAME which is defined in the loop, then
>     STMT_VINFO_VEC_STMT of the defining stmt holds the relevant def.
> @@ -1532,12 +1527,11 @@ vect_get_vec_def_for_operand_1 (gimple *
>     vector invariant.  */
>
>  tree
> -vect_get_vec_def_for_operand (tree op, gimple *stmt, tree vectype)
> +vect_get_vec_def_for_operand (tree op, stmt_vec_info stmt_vinfo, tree vectype)
>  {
>    gimple *def_stmt;
>    enum vect_def_type dt;
>    bool is_simple_use;
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>
>    if (dump_enabled_p ())
> @@ -1683,12 +1677,11 @@ vect_get_vec_defs_for_stmt_copy (enum ve
>  /* Get vectorized definitions for OP0 and OP1.  */
>
>  void
> -vect_get_vec_defs (tree op0, tree op1, gimple *stmt,
> +vect_get_vec_defs (tree op0, tree op1, stmt_vec_info stmt_info,
>                    vec<tree> *vec_oprnds0,
>                    vec<tree> *vec_oprnds1,
>                    slp_tree slp_node)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    if (slp_node)
>      {
>        int nops = (op1 == NULL_TREE) ? 1 : 2;
> @@ -1727,9 +1720,8 @@ vect_get_vec_defs (tree op0, tree op1, g
>     statement and create and return a stmt_vec_info for it.  */
>
>  static stmt_vec_info
> -vect_finish_stmt_generation_1 (gimple *stmt, gimple *vec_stmt)
> +vect_finish_stmt_generation_1 (stmt_vec_info stmt_info, gimple *vec_stmt)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>
>    stmt_vec_info vec_stmt_info = vinfo->add_stmt (vec_stmt);
> @@ -1752,14 +1744,13 @@ vect_finish_stmt_generation_1 (gimple *s
>    return vec_stmt_info;
>  }
>
> -/* Replace the scalar statement STMT with a new vector statement VEC_STMT,
> -   which sets the same scalar result as STMT did.  Create and return a
> +/* Replace the scalar statement STMT_INFO with a new vector statement VEC_STMT,
> +   which sets the same scalar result as STMT_INFO did.  Create and return a
>     stmt_vec_info for VEC_STMT.  */
>
>  stmt_vec_info
> -vect_finish_replace_stmt (gimple *stmt, gimple *vec_stmt)
> +vect_finish_replace_stmt (stmt_vec_info stmt_info, gimple *vec_stmt)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    gcc_assert (gimple_get_lhs (stmt_info->stmt) == gimple_get_lhs (vec_stmt));
>
>    gimple_stmt_iterator gsi = gsi_for_stmt (stmt_info->stmt);
> @@ -1768,14 +1759,13 @@ vect_finish_replace_stmt (gimple *stmt,
>    return vect_finish_stmt_generation_1 (stmt_info, vec_stmt);
>  }
>
> -/* Add VEC_STMT to the vectorized implementation of STMT and insert it
> +/* Add VEC_STMT to the vectorized implementation of STMT_INFO and insert it
>     before *GSI.  Create and return a stmt_vec_info for VEC_STMT.  */
>
>  stmt_vec_info
> -vect_finish_stmt_generation (gimple *stmt, gimple *vec_stmt,
> +vect_finish_stmt_generation (stmt_vec_info stmt_info, gimple *vec_stmt,
>                              gimple_stmt_iterator *gsi)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    gcc_assert (gimple_code (stmt_info->stmt) != GIMPLE_LABEL);
>
>    if (!gsi_end_p (*gsi)
> @@ -1976,22 +1966,21 @@ prepare_load_store_mask (tree mask_type,
>  }
>
>  /* Determine whether we can use a gather load or scatter store to vectorize
> -   strided load or store STMT by truncating the current offset to a smaller
> -   width.  We need to be able to construct an offset vector:
> +   strided load or store STMT_INFO by truncating the current offset to a
> +   smaller width.  We need to be able to construct an offset vector:
>
>       { 0, X, X*2, X*3, ... }
>
> -   without loss of precision, where X is STMT's DR_STEP.
> +   without loss of precision, where X is STMT_INFO's DR_STEP.
>
>     Return true if this is possible, describing the gather load or scatter
>     store in GS_INFO.  MASKED_P is true if the load or store is conditional.  */
>
>  static bool
> -vect_truncate_gather_scatter_offset (gimple *stmt, loop_vec_info loop_vinfo,
> -                                    bool masked_p,
> +vect_truncate_gather_scatter_offset (stmt_vec_info stmt_info,
> +                                    loop_vec_info loop_vinfo, bool masked_p,
>                                      gather_scatter_info *gs_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree step = DR_STEP (dr);
>    if (TREE_CODE (step) != INTEGER_CST)
> @@ -2112,14 +2101,13 @@ vect_use_strided_gather_scatters_p (stmt
>    return true;
>  }
>
> -/* STMT is a non-strided load or store, meaning that it accesses
> +/* STMT_INFO is a non-strided load or store, meaning that it accesses
>     elements with a known constant step.  Return -1 if that step
>     is negative, 0 if it is zero, and 1 if it is greater than zero.  */
>
>  static int
> -compare_step_with_zero (gimple *stmt)
> +compare_step_with_zero (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    return tree_int_cst_compare (vect_dr_behavior (dr)->step,
>                                size_zero_node);
> @@ -2144,29 +2132,29 @@ perm_mask_for_reverse (tree vectype)
>    return vect_gen_perm_mask_checked (vectype, indices);
>  }
>
> -/* STMT is either a masked or unconditional store.  Return the value
> +/* STMT_INFO is either a masked or unconditional store.  Return the value
>     being stored.  */
>
>  tree
> -vect_get_store_rhs (gimple *stmt)
> +vect_get_store_rhs (stmt_vec_info stmt_info)
>  {
> -  if (gassign *assign = dyn_cast <gassign *> (stmt))
> +  if (gassign *assign = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        gcc_assert (gimple_assign_single_p (assign));
>        return gimple_assign_rhs1 (assign);
>      }
> -  if (gcall *call = dyn_cast <gcall *> (stmt))
> +  if (gcall *call = dyn_cast <gcall *> (stmt_info->stmt))
>      {
>        internal_fn ifn = gimple_call_internal_fn (call);
>        int index = internal_fn_stored_value_index (ifn);
>        gcc_assert (index >= 0);
> -      return gimple_call_arg (stmt, index);
> +      return gimple_call_arg (call, index);
>      }
>    gcc_unreachable ();
>  }
>
>  /* A subroutine of get_load_store_type, with a subset of the same
> -   arguments.  Handle the case where STMT is part of a grouped load
> +   arguments.  Handle the case where STMT_INFO is part of a grouped load
>     or store.
>
>     For stores, the statements in the group are all consecutive
> @@ -2175,12 +2163,11 @@ vect_get_store_rhs (gimple *stmt)
>     as well as at the end.  */
>
>  static bool
> -get_group_load_store_type (gimple *stmt, tree vectype, bool slp,
> +get_group_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
>                            bool masked_p, vec_load_store_type vls_type,
>                            vect_memory_access_type *memory_access_type,
>                            gather_scatter_info *gs_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
> @@ -2350,15 +2337,14 @@ get_group_load_store_type (gimple *stmt,
>  }
>
>  /* A subroutine of get_load_store_type, with a subset of the same
> -   arguments.  Handle the case where STMT is a load or store that
> +   arguments.  Handle the case where STMT_INFO is a load or store that
>     accesses consecutive elements with a negative step.  */
>
>  static vect_memory_access_type
> -get_negative_load_store_type (gimple *stmt, tree vectype,
> +get_negative_load_store_type (stmt_vec_info stmt_info, tree vectype,
>                               vec_load_store_type vls_type,
>                               unsigned int ncopies)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    dr_alignment_support alignment_support_scheme;
>
> @@ -2400,7 +2386,7 @@ get_negative_load_store_type (gimple *st
>    return VMAT_CONTIGUOUS_REVERSE;
>  }
>
> -/* Analyze load or store statement STMT of type VLS_TYPE.  Return true
> +/* Analyze load or store statement STMT_INFO of type VLS_TYPE.  Return true
>     if there is a memory access type that the vectorized form can use,
>     storing it in *MEMORY_ACCESS_TYPE if so.  If we decide to use gathers
>     or scatters, fill in GS_INFO accordingly.
> @@ -2411,12 +2397,12 @@ get_negative_load_store_type (gimple *st
>     NCOPIES is the number of vector statements that will be needed.  */
>
>  static bool
> -get_load_store_type (gimple *stmt, tree vectype, bool slp, bool masked_p,
> -                    vec_load_store_type vls_type, unsigned int ncopies,
> +get_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
> +                    bool masked_p, vec_load_store_type vls_type,
> +                    unsigned int ncopies,
>                      vect_memory_access_type *memory_access_type,
>                      gather_scatter_info *gs_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    vec_info *vinfo = stmt_info->vinfo;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
> @@ -2496,12 +2482,12 @@ get_load_store_type (gimple *stmt, tree
>  }
>
>  /* Return true if boolean argument MASK is suitable for vectorizing
> -   conditional load or store STMT.  When returning true, store the type
> +   conditional load or store STMT_INFO.  When returning true, store the type
>     of the definition in *MASK_DT_OUT and the type of the vectorized mask
>     in *MASK_VECTYPE_OUT.  */
>
>  static bool
> -vect_check_load_store_mask (gimple *stmt, tree mask,
> +vect_check_load_store_mask (stmt_vec_info stmt_info, tree mask,
>                             vect_def_type *mask_dt_out,
>                             tree *mask_vectype_out)
>  {
> @@ -2521,7 +2507,6 @@ vect_check_load_store_mask (gimple *stmt
>        return false;
>      }
>
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    enum vect_def_type mask_dt;
>    tree mask_vectype;
>    if (!vect_is_simple_use (mask, stmt_info->vinfo, &mask_dt, &mask_vectype))
> @@ -2566,13 +2551,14 @@ vect_check_load_store_mask (gimple *stmt
>  }
>
>  /* Return true if stored value RHS is suitable for vectorizing store
> -   statement STMT.  When returning true, store the type of the
> +   statement STMT_INFO.  When returning true, store the type of the
>     definition in *RHS_DT_OUT, the type of the vectorized store value in
>     *RHS_VECTYPE_OUT and the type of the store in *VLS_TYPE_OUT.  */
>
>  static bool
> -vect_check_store_rhs (gimple *stmt, tree rhs, vect_def_type *rhs_dt_out,
> -                     tree *rhs_vectype_out, vec_load_store_type *vls_type_out)
> +vect_check_store_rhs (stmt_vec_info stmt_info, tree rhs,
> +                     vect_def_type *rhs_dt_out, tree *rhs_vectype_out,
> +                     vec_load_store_type *vls_type_out)
>  {
>    /* In the case this is a store from a constant make sure
>       native_encode_expr can handle it.  */
> @@ -2584,7 +2570,6 @@ vect_check_store_rhs (gimple *stmt, tree
>        return false;
>      }
>
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    enum vect_def_type rhs_dt;
>    tree rhs_vectype;
>    if (!vect_is_simple_use (rhs, stmt_info->vinfo, &rhs_dt, &rhs_vectype))
> @@ -2666,18 +2651,19 @@ vect_build_zero_merge_argument (stmt_vec
>    return vect_init_vector (stmt_info, merge, vectype, NULL);
>  }
>
> -/* Build a gather load call while vectorizing STMT.  Insert new instructions
> -   before GSI and add them to VEC_STMT.  GS_INFO describes the gather load
> -   operation.  If the load is conditional, MASK is the unvectorized
> -   condition and MASK_DT is its definition type, otherwise MASK is null.  */
> +/* Build a gather load call while vectorizing STMT_INFO.  Insert new
> +   instructions before GSI and add them to VEC_STMT.  GS_INFO describes
> +   the gather load operation.  If the load is conditional, MASK is the
> +   unvectorized condition and MASK_DT is its definition type, otherwise
> +   MASK is null.  */

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [34/46] Alter interface to vect_get_vec_def_for_stmt_copy
  2018-07-24 10:06 ` [34/46] Alter interface to vect_get_vec_def_for_stmt_copy Richard Sandiford
@ 2018-07-25 10:13   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:13 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:06 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes vect_get_vec_def_for_stmt_copy take a vec_info
> rather than a vect_def_type.  If the vector operand passed in is
> defined in the vectorised region, we should look for copies in
> the normal way.  If it's defined in an external statement
> (such as by vect_init_vector_1), we should just use the original value.
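
[Condensed from the hunks quoted below, the call-site idiom after this
change is: first vector copy from the scalar operand, subsequent copies
from the previous vector def.  Only the first argument changes; callers
no longer need to track a vect_def_type for the operand.

    if (j == 0)
      vec_oprnd0 = vect_get_vec_def_for_operand (op, stmt_info);
    else
      vec_oprnd0 = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
]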

Ok, that works for non-SLP (which this is all about).

Would be nice to refactor this to an iterator interface somehow...
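
[A rough sketch of one possible shape for such an iterator interface
(hypothetical: the class and its name are invented here and are not part
of the series; it only wraps the function under review):

    /* Hypothetical sketch.  Walks the per-copy chain of vector defs that
       vect_get_vec_def_for_stmt_copy otherwise steps through one explicit
       call at a time.  */
    class vec_def_copy_iterator
    {
    public:
      vec_def_copy_iterator (vec_info *vinfo, tree first_def)
        : m_vinfo (vinfo), m_def (first_def) {}

      /* Current vector def in the chain.  */
      tree operator* () const { return m_def; }

      /* Advance to the def of the next statement copy.  */
      vec_def_copy_iterator &operator++ ()
      {
        m_def = vect_get_vec_def_for_stmt_copy (m_vinfo, m_def);
        return *this;
      }

    private:
      vec_info *m_vinfo;
      tree m_def;
    };

With something like this, the vx.1/vx.2/vx.3 chain in the comment quoted
below could become a loop over the iterator instead of three explicit
calls.]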

> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_get_vec_defs_for_stmt_copy)
>         (vect_get_vec_def_for_stmt_copy): Take a vec_info rather than
>         a vect_def_type for the first argument.
>         * tree-vect-stmts.c (vect_get_vec_defs_for_stmt_copy): Likewise.
>         (vect_get_vec_def_for_stmt_copy): Likewise.  Return the original
>         operand if it isn't defined by a vectorized statement.
>         (vect_build_gather_load_calls): Remove the mask_dt argument and
>         update calls to vect_get_vec_def_for_stmt_copy.
>         (vectorizable_bswap): Likewise the dt argument.
>         (vectorizable_call): Update calls to vectorizable_bswap and
>         vect_get_vec_def_for_stmt_copy.
>         (vectorizable_simd_clone_call, vectorizable_assignment)
>         (vectorizable_shift, vectorizable_operation, vectorizable_condition)
>         (vectorizable_comparison): Update calls to
>         vect_get_vec_def_for_stmt_copy.
>         (vectorizable_store): Likewise.  Remove now-unnecessary calls to
>         vect_is_simple_use.
>         (vect_get_loop_based_defs): Remove dt argument and update call
>         to vect_get_vec_def_for_stmt_copy.
>         (vectorizable_conversion): Update calls to vect_get_loop_based_defs
>         and vect_get_vec_def_for_stmt_copy.
>         (vectorizable_load): Update calls to vect_build_gather_load_calls
>         and vect_get_vec_def_for_stmt_copy.
>         * tree-vect-loop.c (vect_create_epilog_for_reduction)
>         (vectorizable_reduction, vectorizable_live_operation): Update calls
>         to vect_get_vec_def_for_stmt_copy.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:50.008602115 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:56.440544995 +0100
> @@ -1514,11 +1514,11 @@ extern tree vect_get_vec_def_for_operand
>  extern tree vect_get_vec_def_for_operand (tree, stmt_vec_info, tree = NULL);
>  extern void vect_get_vec_defs (tree, tree, stmt_vec_info, vec<tree> *,
>                                vec<tree> *, slp_tree);
> -extern void vect_get_vec_defs_for_stmt_copy (enum vect_def_type *,
> +extern void vect_get_vec_defs_for_stmt_copy (vec_info *,
>                                              vec<tree> *, vec<tree> *);
>  extern tree vect_init_vector (stmt_vec_info, tree, tree,
>                                gimple_stmt_iterator *);
> -extern tree vect_get_vec_def_for_stmt_copy (enum vect_def_type, tree);
> +extern tree vect_get_vec_def_for_stmt_copy (vec_info *, tree);
>  extern bool vect_transform_stmt (stmt_vec_info, gimple_stmt_iterator *,
>                                   bool *, slp_tree, slp_instance);
>  extern void vect_remove_stores (stmt_vec_info);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:50.008602115 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:23:56.440544995 +0100
> @@ -1580,8 +1580,7 @@ vect_get_vec_def_for_operand (tree op, s
>     created in case the vectorized result cannot fit in one vector, and several
>     copies of the vector-stmt are required.  In this case the vector-def is
>     retrieved from the vector stmt recorded in the STMT_VINFO_RELATED_STMT field
> -   of the stmt that defines VEC_OPRND.
> -   DT is the type of the vector def VEC_OPRND.
> +   of the stmt that defines VEC_OPRND.  VINFO describes the vectorization.
>
>     Context:
>          In case the vectorization factor (VF) is bigger than the number
> @@ -1625,29 +1624,24 @@ vect_get_vec_def_for_operand (tree op, s
>     STMT_VINFO_RELATED_STMT field of 'VS1.0' we obtain the next copy - 'VS1.1',
>     and return its def ('vx.1').
>     Overall, to create the above sequence this function will be called 3 times:
> -        vx.1 = vect_get_vec_def_for_stmt_copy (dt, vx.0);
> -        vx.2 = vect_get_vec_def_for_stmt_copy (dt, vx.1);
> -        vx.3 = vect_get_vec_def_for_stmt_copy (dt, vx.2);  */
> +       vx.1 = vect_get_vec_def_for_stmt_copy (vinfo, vx.0);
> +       vx.2 = vect_get_vec_def_for_stmt_copy (vinfo, vx.1);
> +       vx.3 = vect_get_vec_def_for_stmt_copy (vinfo, vx.2);  */
>
>  tree
> -vect_get_vec_def_for_stmt_copy (enum vect_def_type dt, tree vec_oprnd)
> +vect_get_vec_def_for_stmt_copy (vec_info *vinfo, tree vec_oprnd)
>  {
> -  gimple *vec_stmt_for_operand;
> -  stmt_vec_info def_stmt_info;
> -
> -  /* Do nothing; can reuse same def.  */
> -  if (dt == vect_external_def || dt == vect_constant_def )
> +  stmt_vec_info def_stmt_info = vinfo->lookup_def (vec_oprnd);
> +  if (!def_stmt_info)
> +    /* Do nothing; can reuse same def.  */
>      return vec_oprnd;
>
> -  vec_stmt_for_operand = SSA_NAME_DEF_STMT (vec_oprnd);
> -  def_stmt_info = vinfo_for_stmt (vec_stmt_for_operand);
> +  def_stmt_info = STMT_VINFO_RELATED_STMT (def_stmt_info);
>    gcc_assert (def_stmt_info);
> -  vec_stmt_for_operand = STMT_VINFO_RELATED_STMT (def_stmt_info);
> -  gcc_assert (vec_stmt_for_operand);
> -  if (gimple_code (vec_stmt_for_operand) == GIMPLE_PHI)
> -    vec_oprnd = PHI_RESULT (vec_stmt_for_operand);
> +  if (gphi *phi = dyn_cast <gphi *> (def_stmt_info->stmt))
> +    vec_oprnd = PHI_RESULT (phi);
>    else
> -    vec_oprnd = gimple_get_lhs (vec_stmt_for_operand);
> +    vec_oprnd = gimple_get_lhs (def_stmt_info->stmt);
>    return vec_oprnd;
>  }
>
> @@ -1656,19 +1650,19 @@ vect_get_vec_def_for_stmt_copy (enum vec
>     stmt.  See vect_get_vec_def_for_stmt_copy () for details.  */
>
>  void
> -vect_get_vec_defs_for_stmt_copy (enum vect_def_type *dt,
> +vect_get_vec_defs_for_stmt_copy (vec_info *vinfo,
>                                  vec<tree> *vec_oprnds0,
>                                  vec<tree> *vec_oprnds1)
>  {
>    tree vec_oprnd = vec_oprnds0->pop ();
>
> -  vec_oprnd = vect_get_vec_def_for_stmt_copy (dt[0], vec_oprnd);
> +  vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
>    vec_oprnds0->quick_push (vec_oprnd);
>
>    if (vec_oprnds1 && vec_oprnds1->length ())
>      {
>        vec_oprnd = vec_oprnds1->pop ();
> -      vec_oprnd = vect_get_vec_def_for_stmt_copy (dt[1], vec_oprnd);
> +      vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
>        vec_oprnds1->quick_push (vec_oprnd);
>      }
>  }
> @@ -2662,7 +2656,7 @@ vect_build_gather_load_calls (stmt_vec_i
>                               gimple_stmt_iterator *gsi,
>                               stmt_vec_info *vec_stmt,
>                               gather_scatter_info *gs_info,
> -                             tree mask, vect_def_type mask_dt)
> +                             tree mask)
>  {
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
> @@ -2767,8 +2761,8 @@ vect_build_gather_load_calls (stmt_vec_i
>         op = vec_oprnd0
>           = vect_get_vec_def_for_operand (gs_info->offset, stmt_info);
>        else
> -       op = vec_oprnd0
> -         = vect_get_vec_def_for_stmt_copy (gs_info->offset_dt, vec_oprnd0);
> +       op = vec_oprnd0 = vect_get_vec_def_for_stmt_copy (loop_vinfo,
> +                                                         vec_oprnd0);
>
>        if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))
>         {
> @@ -2791,7 +2785,8 @@ vect_build_gather_load_calls (stmt_vec_i
>               if (j == 0)
>                 vec_mask = vect_get_vec_def_for_operand (mask, stmt_info);
>               else
> -               vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
> +               vec_mask = vect_get_vec_def_for_stmt_copy (loop_vinfo,
> +                                                          vec_mask);
>
>               mask_op = vec_mask;
>               if (!useless_type_conversion_p (masktype, TREE_TYPE (vec_mask)))
> @@ -2951,11 +2946,11 @@ vect_get_data_ptr_increment (data_refere
>  static bool
>  vectorizable_bswap (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>                     stmt_vec_info *vec_stmt, slp_tree slp_node,
> -                   tree vectype_in, enum vect_def_type *dt,
> -                   stmt_vector_for_cost *cost_vec)
> +                   tree vectype_in, stmt_vector_for_cost *cost_vec)
>  {
>    tree op, vectype;
>    gcall *stmt = as_a <gcall *> (stmt_info->stmt);
> +  vec_info *vinfo = stmt_info->vinfo;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    unsigned ncopies;
>    unsigned HOST_WIDE_INT nunits, num_bytes;
> @@ -3021,7 +3016,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
>        if (j == 0)
>         vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
>        else
> -        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
> +       vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds, NULL);
>
>        /* Arguments are ready. create the new vector stmt.  */
>        unsigned i;
> @@ -3301,7 +3296,7 @@ vectorizable_call (stmt_vec_info stmt_in
>                    || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP32)
>                    || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP64)))
>         return vectorizable_bswap (stmt_info, gsi, vec_stmt, slp_node,
> -                                  vectype_in, dt, cost_vec);
> +                                  vectype_in, cost_vec);
>        else
>         {
>           if (dump_enabled_p ())
> @@ -3450,7 +3445,7 @@ vectorizable_call (stmt_vec_info stmt_in
>                   = vect_get_vec_def_for_operand (op, stmt_info);
>               else
>                 vec_oprnd0
> -                 = vect_get_vec_def_for_stmt_copy (dt[i], orig_vargs[i]);
> +                 = vect_get_vec_def_for_stmt_copy (vinfo, orig_vargs[i]);
>
>               orig_vargs[i] = vargs[i] = vec_oprnd0;
>             }
> @@ -3582,16 +3577,16 @@ vectorizable_call (stmt_vec_info stmt_in
>                   vec_oprnd0
>                     = vect_get_vec_def_for_operand (op, stmt_info);
>                   vec_oprnd1
> -                   = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd0);
> +                   = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
>                 }
>               else
>                 {
>                   vec_oprnd1 = gimple_call_arg (new_stmt_info->stmt,
>                                                 2 * i + 1);
>                   vec_oprnd0
> -                   = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd1);
> +                   = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
>                   vec_oprnd1
> -                   = vect_get_vec_def_for_stmt_copy (dt[i], vec_oprnd0);
> +                   = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
>                 }
>
>               vargs.quick_push (vec_oprnd0);
> @@ -4103,7 +4098,7 @@ vectorizable_simd_clone_call (stmt_vec_i
>                           vec_oprnd0 = arginfo[i].op;
>                           if ((m & (k - 1)) == 0)
>                             vec_oprnd0
> -                             = vect_get_vec_def_for_stmt_copy (arginfo[i].dt,
> +                             = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                                 vec_oprnd0);
>                         }
>                       arginfo[i].op = vec_oprnd0;
> @@ -4134,7 +4129,7 @@ vectorizable_simd_clone_call (stmt_vec_i
>                               = vect_get_vec_def_for_operand (op, stmt_info);
>                           else
>                             vec_oprnd0
> -                             = vect_get_vec_def_for_stmt_copy (arginfo[i].dt,
> +                             = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                                 arginfo[i].op);
>                           arginfo[i].op = vec_oprnd0;
>                           if (k == 1)
> @@ -4440,9 +4435,9 @@ vect_gen_widened_results_half (enum tree
>
>  static void
>  vect_get_loop_based_defs (tree *oprnd, stmt_vec_info stmt_info,
> -                         enum vect_def_type dt, vec<tree> *vec_oprnds,
> -                         int multi_step_cvt)
> +                         vec<tree> *vec_oprnds, int multi_step_cvt)
>  {
> +  vec_info *vinfo = stmt_info->vinfo;
>    tree vec_oprnd;
>
>    /* Get first vector operand.  */
> @@ -4451,12 +4446,12 @@ vect_get_loop_based_defs (tree *oprnd, s
>    if (TREE_CODE (TREE_TYPE (*oprnd)) != VECTOR_TYPE)
>      vec_oprnd = vect_get_vec_def_for_operand (*oprnd, stmt_info);
>    else
> -    vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, *oprnd);
> +    vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, *oprnd);
>
>    vec_oprnds->quick_push (vec_oprnd);
>
>    /* Get second vector operand.  */
> -  vec_oprnd = vect_get_vec_def_for_stmt_copy (dt, vec_oprnd);
> +  vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd);
>    vec_oprnds->quick_push (vec_oprnd);
>
>    *oprnd = vec_oprnd;
> @@ -4464,7 +4459,7 @@ vect_get_loop_based_defs (tree *oprnd, s
>    /* For conversion in multiple steps, continue to get operands
>       recursively.  */
>    if (multi_step_cvt)
> -    vect_get_loop_based_defs (oprnd, stmt_info, dt, vec_oprnds,
> +    vect_get_loop_based_defs (oprnd, stmt_info, vec_oprnds,
>                               multi_step_cvt - 1);
>  }
>
> @@ -4983,7 +4978,7 @@ vectorizable_conversion (stmt_vec_info s
>             vect_get_vec_defs (op0, NULL, stmt_info, &vec_oprnds0,
>                                NULL, slp_node);
>           else
> -           vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, NULL);
> +           vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, NULL);
>
>           FOR_EACH_VEC_ELT (vec_oprnds0, i, vop0)
>             {
> @@ -5070,7 +5065,7 @@ vectorizable_conversion (stmt_vec_info s
>             }
>           else
>             {
> -             vec_oprnd0 = vect_get_vec_def_for_stmt_copy (dt[0], vec_oprnd0);
> +             vec_oprnd0 = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
>               vec_oprnds0.truncate (0);
>               vec_oprnds0.quick_push (vec_oprnd0);
>               if (op_type == binary_op)
> @@ -5078,7 +5073,7 @@ vectorizable_conversion (stmt_vec_info s
>                   if (code == WIDEN_LSHIFT_EXPR)
>                     vec_oprnd1 = op1;
>                   else
> -                   vec_oprnd1 = vect_get_vec_def_for_stmt_copy (dt[1],
> +                   vec_oprnd1 = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                                  vec_oprnd1);
>                   vec_oprnds1.truncate (0);
>                   vec_oprnds1.quick_push (vec_oprnd1);
> @@ -5160,8 +5155,7 @@ vectorizable_conversion (stmt_vec_info s
>           else
>             {
>               vec_oprnds0.truncate (0);
> -             vect_get_loop_based_defs (&last_oprnd, stmt_info, dt[0],
> -                                       &vec_oprnds0,
> +             vect_get_loop_based_defs (&last_oprnd, stmt_info, &vec_oprnds0,
>                                         vect_pow2 (multi_step_cvt) - 1);
>             }
>
> @@ -5338,7 +5332,7 @@ vectorizable_assignment (stmt_vec_info s
>        if (j == 0)
>         vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node);
>        else
> -        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL);
> +       vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds, NULL);
>
>        /* Arguments are ready. create the new vector stmt.  */
>        stmt_vec_info new_stmt_info = NULL;
> @@ -5742,7 +5736,7 @@ vectorizable_shift (stmt_vec_info stmt_i
>                                slp_node);
>          }
>        else
> -        vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
> +       vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, &vec_oprnds1);
>
>        /* Arguments are ready.  Create the new vector stmt.  */
>        stmt_vec_info new_stmt_info = NULL;
> @@ -6120,11 +6114,11 @@ vectorizable_operation (stmt_vec_info st
>         }
>        else
>         {
> -         vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds0, &vec_oprnds1);
> +         vect_get_vec_defs_for_stmt_copy (vinfo, &vec_oprnds0, &vec_oprnds1);
>           if (op_type == ternary_op)
>             {
>               tree vec_oprnd = vec_oprnds2.pop ();
> -             vec_oprnds2.quick_push (vect_get_vec_def_for_stmt_copy (dt[2],
> +             vec_oprnds2.quick_push (vect_get_vec_def_for_stmt_copy (vinfo,
>                                                                    vec_oprnd));
>             }
>         }
> @@ -6533,7 +6527,7 @@ vectorizable_store (stmt_vec_info stmt_i
>               if (modifier == WIDEN)
>                 {
>                   src = vec_oprnd1
> -                   = vect_get_vec_def_for_stmt_copy (rhs_dt, vec_oprnd1);
> +                   = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
>                   op = permute_vec_elements (vec_oprnd0, vec_oprnd0, perm_mask,
>                                              stmt_info, gsi);
>                 }
> @@ -6542,8 +6536,7 @@ vectorizable_store (stmt_vec_info stmt_i
>                   src = permute_vec_elements (vec_oprnd1, vec_oprnd1, perm_mask,
>                                               stmt_info, gsi);
>                   op = vec_oprnd0
> -                   = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
> -                                                     vec_oprnd0);
> +                   = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
>                 }
>               else
>                 gcc_unreachable ();
> @@ -6551,10 +6544,9 @@ vectorizable_store (stmt_vec_info stmt_i
>           else
>             {
>               src = vec_oprnd1
> -               = vect_get_vec_def_for_stmt_copy (rhs_dt, vec_oprnd1);
> +               = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd1);
>               op = vec_oprnd0
> -               = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
> -                                                 vec_oprnd0);
> +               = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnd0);
>             }
>
>           if (!useless_type_conversion_p (srctype, TREE_TYPE (src)))
> @@ -6811,11 +6803,8 @@ vectorizable_store (stmt_vec_info stmt_i
>                   if (slp)
>                     vec_oprnd = vec_oprnds[j];
>                   else
> -                   {
> -                     vect_is_simple_use (op, vinfo, &rhs_dt);
> -                     vec_oprnd = vect_get_vec_def_for_stmt_copy (rhs_dt,
> -                                                                 vec_oprnd);
> -                   }
> +                   vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo,
> +                                                               vec_oprnd);
>                 }
>               /* Pun the vector to extract from if necessary.  */
>               if (lvectype != vectype)
> @@ -7060,19 +7049,17 @@ vectorizable_store (stmt_vec_info stmt_i
>           for (i = 0; i < group_size; i++)
>             {
>               op = oprnds[i];
> -             vect_is_simple_use (op, vinfo, &rhs_dt);
> -             vec_oprnd = vect_get_vec_def_for_stmt_copy (rhs_dt, op);
> +             vec_oprnd = vect_get_vec_def_for_stmt_copy (vinfo, op);
>               dr_chain[i] = vec_oprnd;
>               oprnds[i] = vec_oprnd;
>             }
>           if (mask)
> -           vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
> +           vec_mask = vect_get_vec_def_for_stmt_copy (vinfo, vec_mask);
>           if (dataref_offset)
>             dataref_offset
>               = int_const_binop (PLUS_EXPR, dataref_offset, bump);
>           else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
> -           vec_offset = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
> -                                                        vec_offset);
> +           vec_offset = vect_get_vec_def_for_stmt_copy (vinfo, vec_offset);
>           else
>             dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
>                                            stmt_info, bump);
> @@ -7680,8 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
>
>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>      {
> -      vect_build_gather_load_calls (stmt_info, gsi, vec_stmt, &gs_info, mask,
> -                                   mask_dt);
> +      vect_build_gather_load_calls (stmt_info, gsi, vec_stmt, &gs_info, mask);
>        return true;
>      }
>
> @@ -8233,13 +8219,12 @@ vectorizable_load (stmt_vec_info stmt_in
>             dataref_offset = int_const_binop (PLUS_EXPR, dataref_offset,
>                                               bump);
>           else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
> -           vec_offset = vect_get_vec_def_for_stmt_copy (gs_info.offset_dt,
> -                                                        vec_offset);
> +           vec_offset = vect_get_vec_def_for_stmt_copy (vinfo, vec_offset);
>           else
>             dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
>                                            stmt_info, bump);
>           if (mask)
> -           vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask);
> +           vec_mask = vect_get_vec_def_for_stmt_copy (vinfo, vec_mask);
>         }
>
>        if (grouped_load || slp_perm)
> @@ -8733,6 +8718,7 @@ vectorizable_condition (stmt_vec_info st
>                         int reduc_index, slp_tree slp_node,
>                         stmt_vector_for_cost *cost_vec)
>  {
> +  vec_info *vinfo = stmt_info->vinfo;
>    tree scalar_dest = NULL_TREE;
>    tree vec_dest = NULL_TREE;
>    tree cond_expr, cond_expr0 = NULL_TREE, cond_expr1 = NULL_TREE;
> @@ -8994,16 +8980,14 @@ vectorizable_condition (stmt_vec_info st
>        else
>         {
>           vec_cond_lhs
> -           = vect_get_vec_def_for_stmt_copy (dts[0],
> -                                             vec_oprnds0.pop ());
> +           = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnds0.pop ());
>           if (!masked)
>             vec_cond_rhs
> -             = vect_get_vec_def_for_stmt_copy (dts[1],
> -                                               vec_oprnds1.pop ());
> +             = vect_get_vec_def_for_stmt_copy (vinfo, vec_oprnds1.pop ());
>
> -         vec_then_clause = vect_get_vec_def_for_stmt_copy (dts[2],
> +         vec_then_clause = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                             vec_oprnds2.pop ());
> -         vec_else_clause = vect_get_vec_def_for_stmt_copy (dts[3],
> +         vec_else_clause = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                             vec_oprnds3.pop ());
>         }
>
> @@ -9135,6 +9119,7 @@ vectorizable_comparison (stmt_vec_info s
>                          stmt_vec_info *vec_stmt, tree reduc_def,
>                          slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>  {
> +  vec_info *vinfo = stmt_info->vinfo;
>    tree lhs, rhs1, rhs2;
>    tree vectype1 = NULL_TREE, vectype2 = NULL_TREE;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> @@ -9331,9 +9316,9 @@ vectorizable_comparison (stmt_vec_info s
>         }
>        else
>         {
> -         vec_rhs1 = vect_get_vec_def_for_stmt_copy (dts[0],
> +         vec_rhs1 = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                      vec_oprnds0.pop ());
> -         vec_rhs2 = vect_get_vec_def_for_stmt_copy (dts[1],
> +         vec_rhs2 = vect_get_vec_def_for_stmt_copy (vinfo,
>                                                      vec_oprnds1.pop ());
>         }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:50.004602150 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:23:56.436545030 +0100
> @@ -4421,7 +4421,6 @@ vect_create_epilog_for_reduction (vec<tr
>    bool nested_in_vect_loop = false;
>    auto_vec<gimple *> new_phis;
>    auto_vec<stmt_vec_info> inner_phis;
> -  enum vect_def_type dt = vect_unknown_def_type;
>    int j, i;
>    auto_vec<tree> scalar_results;
>    unsigned int group_size = 1, k, ratio;
> @@ -4528,8 +4527,7 @@ vect_create_epilog_for_reduction (vec<tr
>               phi_info = STMT_VINFO_RELATED_STMT (phi_info);
>               if (nested_in_vect_loop)
>                 vec_init_def
> -                 = vect_get_vec_def_for_stmt_copy (initial_def_dt,
> -                                                   vec_init_def);
> +                 = vect_get_vec_def_for_stmt_copy (loop_vinfo, vec_init_def);
>             }
>
>           /* Set the loop-entry arg of the reduction-phi.  */
> @@ -4556,7 +4554,7 @@ vect_create_epilog_for_reduction (vec<tr
>
>            /* Set the loop-latch arg for the reduction-phi.  */
>            if (j > 0)
> -            def = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type, def);
> +           def = vect_get_vec_def_for_stmt_copy (loop_vinfo, def);
>
>           add_phi_arg (phi, def, loop_latch_edge (loop), UNKNOWN_LOCATION);
>
> @@ -4697,7 +4695,7 @@ vect_create_epilog_for_reduction (vec<tr
>              new_phis.quick_push (phi);
>            else
>             {
> -             def = vect_get_vec_def_for_stmt_copy (dt, def);
> +             def = vect_get_vec_def_for_stmt_copy (loop_vinfo, def);
>               STMT_VINFO_RELATED_STMT (prev_phi_info) = phi_info;
>             }
>
> @@ -7111,19 +7109,22 @@ vectorizable_reduction (stmt_vec_info st
>                 vec_oprnds0[0] = gimple_get_lhs (new_stmt_info->stmt);
>               else
>                 vec_oprnds0[0]
> -                 = vect_get_vec_def_for_stmt_copy (dts[0], vec_oprnds0[0]);
> +                 = vect_get_vec_def_for_stmt_copy (loop_vinfo,
> +                                                   vec_oprnds0[0]);
>               if (single_defuse_cycle && reduc_index == 1)
>                 vec_oprnds1[0] = gimple_get_lhs (new_stmt_info->stmt);
>               else
>                 vec_oprnds1[0]
> -                 = vect_get_vec_def_for_stmt_copy (dts[1], vec_oprnds1[0]);
> +                 = vect_get_vec_def_for_stmt_copy (loop_vinfo,
> +                                                   vec_oprnds1[0]);
>               if (op_type == ternary_op)
>                 {
>                   if (single_defuse_cycle && reduc_index == 2)
>                     vec_oprnds2[0] = gimple_get_lhs (new_stmt_info->stmt);
>                   else
>                     vec_oprnds2[0]
> -                     = vect_get_vec_def_for_stmt_copy (dts[2], vec_oprnds2[0]);
> +                     = vect_get_vec_def_for_stmt_copy (loop_vinfo,
> +                                                       vec_oprnds2[0]);
>                 }
>              }
>          }
> @@ -7945,8 +7946,7 @@ vectorizable_live_operation (stmt_vec_in
>
>        /* For multiple copies, get the last copy.  */
>        for (int i = 1; i < ncopies; ++i)
> -       vec_lhs = vect_get_vec_def_for_stmt_copy (vect_unknown_def_type,
> -                                                 vec_lhs);
> +       vec_lhs = vect_get_vec_def_for_stmt_copy (loop_vinfo, vec_lhs);
>
>        /* Get the last lane in the vector.  */
>        bitstart = int_const_binop (MINUS_EXPR, vec_bitsize, bitsize);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [35/46] Alter interfaces within vect_pattern_recog
  2018-07-24 10:06 ` [35/46] Alter interfaces within vect_pattern_recog Richard Sandiford
@ 2018-07-25 10:14   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:14 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:06 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> vect_pattern_recog_1 took a gimple_stmt_iterator as an argument, but was
> only interested in the gsi_stmt, nothing else.  This patch makes
> the associated routines operate directly on stmt_vec_infos.

OK
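
[Condensed from the hunks below, the change at a typical call site is:

    /* Before: pass the iterator, even though only its stmt was used.  */
    vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], si);

    /* After: look the statement up once and pass its stmt_vec_info.  */
    stmt_vec_info stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
    vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], stmt_info);
]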

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vect-patterns.c (vect_mark_pattern_stmts): Take the
>         original stmt as a stmt_vec_info rather than a gimple stmt.
>         (vect_pattern_recog_1): Take the statement directly as a
>         stmt_vec_info, rather than via a gimple_stmt_iterator.
>         Update call to vect_mark_pattern_stmts.
>         (vect_pattern_recog): Update calls accordingly.
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:50.004602150 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:23:59.408518638 +0100
> @@ -4720,29 +4720,29 @@ const unsigned int NUM_PATTERNS = ARRAY_
>  /* Mark statements that are involved in a pattern.  */
>
>  static inline void
> -vect_mark_pattern_stmts (gimple *orig_stmt, gimple *pattern_stmt,
> +vect_mark_pattern_stmts (stmt_vec_info orig_stmt_info, gimple *pattern_stmt,
>                           tree pattern_vectype)
>  {
> -  stmt_vec_info orig_stmt_info = vinfo_for_stmt (orig_stmt);
>    gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
>
> -  bool old_pattern_p = is_pattern_stmt_p (orig_stmt_info);
> -  if (old_pattern_p)
> +  gimple *orig_pattern_stmt = NULL;
> +  if (is_pattern_stmt_p (orig_stmt_info))
>      {
>        /* We're replacing a statement in an existing pattern definition
>          sequence.  */
> +      orig_pattern_stmt = orig_stmt_info->stmt;
>        if (dump_enabled_p ())
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "replacing earlier pattern ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, orig_stmt, 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, orig_pattern_stmt, 0);
>         }
>
>        /* To keep the book-keeping simple, just swap the lhs of the
>          old and new statements, so that the old one has a valid but
>          unused lhs.  */
> -      tree old_lhs = gimple_get_lhs (orig_stmt);
> -      gimple_set_lhs (orig_stmt, gimple_get_lhs (pattern_stmt));
> +      tree old_lhs = gimple_get_lhs (orig_pattern_stmt);
> +      gimple_set_lhs (orig_pattern_stmt, gimple_get_lhs (pattern_stmt));
>        gimple_set_lhs (pattern_stmt, old_lhs);
>
>        if (dump_enabled_p ())
> @@ -4755,7 +4755,8 @@ vect_mark_pattern_stmts (gimple *orig_st
>        orig_stmt_info = STMT_VINFO_RELATED_STMT (orig_stmt_info);
>
>        /* We shouldn't be replacing the main pattern statement.  */
> -      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info) != orig_stmt);
> +      gcc_assert (STMT_VINFO_RELATED_STMT (orig_stmt_info)->stmt
> +                 != orig_pattern_stmt);
>      }
>
>    if (def_seq)
> @@ -4763,13 +4764,14 @@ vect_mark_pattern_stmts (gimple *orig_st
>          !gsi_end_p (si); gsi_next (&si))
>        vect_init_pattern_stmt (gsi_stmt (si), orig_stmt_info, pattern_vectype);
>
> -  if (old_pattern_p)
> +  if (orig_pattern_stmt)
>      {
>        vect_init_pattern_stmt (pattern_stmt, orig_stmt_info, pattern_vectype);
>
>        /* Insert all the new pattern statements before the original one.  */
>        gimple_seq *orig_def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
> -      gimple_stmt_iterator gsi = gsi_for_stmt (orig_stmt, orig_def_seq);
> +      gimple_stmt_iterator gsi = gsi_for_stmt (orig_pattern_stmt,
> +                                              orig_def_seq);
>        gsi_insert_seq_before_without_update (&gsi, def_seq, GSI_SAME_STMT);
>        gsi_insert_before_without_update (&gsi, pattern_stmt, GSI_SAME_STMT);
>
> @@ -4785,12 +4787,12 @@ vect_mark_pattern_stmts (gimple *orig_st
>     Input:
>     PATTERN_RECOG_FUNC: A pointer to a function that detects a certain
>          computation pattern.
> -   STMT: A stmt from which the pattern search should start.
> +   STMT_INFO: A stmt from which the pattern search should start.
>
>     If PATTERN_RECOG_FUNC successfully detected the pattern, it creates
>     a sequence of statements that has the same functionality and can be
> -   used to replace STMT.  It returns the last statement in the sequence
> -   and adds any earlier statements to STMT's STMT_VINFO_PATTERN_DEF_SEQ.
> +   used to replace STMT_INFO.  It returns the last statement in the sequence
> +   and adds any earlier statements to STMT_INFO's STMT_VINFO_PATTERN_DEF_SEQ.
>     PATTERN_RECOG_FUNC also sets *TYPE_OUT to the vector type of the final
>     statement, having first checked that the target supports the new operation
>     in that type.
> @@ -4799,10 +4801,10 @@ vect_mark_pattern_stmts (gimple *orig_st
>     for vect_recog_pattern.  */
>
>  static void
> -vect_pattern_recog_1 (vect_recog_func *recog_func, gimple_stmt_iterator si)
> +vect_pattern_recog_1 (vect_recog_func *recog_func, stmt_vec_info stmt_info)
>  {
> -  gimple *stmt = gsi_stmt (si), *pattern_stmt;
> -  stmt_vec_info stmt_info;
> +  vec_info *vinfo = stmt_info->vinfo;
> +  gimple *pattern_stmt;
>    loop_vec_info loop_vinfo;
>    tree pattern_vectype;
>
> @@ -4810,13 +4812,12 @@ vect_pattern_recog_1 (vect_recog_func *r
>       leave the original statement alone, since the first match wins.
>       Instead try to match against the definition statements that feed
>       the main pattern statement.  */
> -  stmt_info = vinfo_for_stmt (stmt);
>    if (STMT_VINFO_IN_PATTERN_P (stmt_info))
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start (STMT_VINFO_PATTERN_DEF_SEQ (stmt_info));
>            !gsi_end_p (gsi); gsi_next (&gsi))
> -       vect_pattern_recog_1 (recog_func, gsi);
> +       vect_pattern_recog_1 (recog_func, vinfo->lookup_stmt (gsi_stmt (gsi)));
>        return;
>      }
>
> @@ -4841,7 +4842,7 @@ vect_pattern_recog_1 (vect_recog_func *r
>      }
>
>    /* Mark the stmts that are involved in the pattern. */
> -  vect_mark_pattern_stmts (stmt, pattern_stmt, pattern_vectype);
> +  vect_mark_pattern_stmts (stmt_info, pattern_stmt, pattern_vectype);
>
>    /* Patterns cannot be vectorized using SLP, because they change the order of
>       computation.  */
> @@ -4957,9 +4958,13 @@ vect_pattern_recog (vec_info *vinfo)
>         {
>           basic_block bb = bbs[i];
>           for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
> -           /* Scan over all generic vect_recog_xxx_pattern functions.  */
> -           for (j = 0; j < NUM_PATTERNS; j++)
> -             vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], si);
> +           {
> +             stmt_vec_info stmt_info = vinfo->lookup_stmt (gsi_stmt (si));
> +             /* Scan over all generic vect_recog_xxx_pattern functions.  */
> +             for (j = 0; j < NUM_PATTERNS; j++)
> +               vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j],
> +                                     stmt_info);
> +           }
>         }
>      }
>    else
> @@ -4975,7 +4980,7 @@ vect_pattern_recog (vec_info *vinfo)
>
>           /* Scan over all generic vect_recog_xxx_pattern functions.  */
>           for (j = 0; j < NUM_PATTERNS; j++)
> -           vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], si);
> +           vect_pattern_recog_1 (&vect_vect_recog_func_ptrs[j], stmt_info);
>         }
>      }
>  }

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-24 10:07 ` [36/46] Add a pattern_stmt_p field to stmt_vec_info Richard Sandiford
@ 2018-07-25 10:15   ` Richard Biener
  2018-07-25 11:09     ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:15 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:07 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
> possible to tell whether the statement is a pattern statement without
> referring to other statements.  The new field goes in what was
> previously a hole in the structure, so the size is the same as before.

Not sure what the advantage is?  is_pattern_stmt_p () looks nicer
than ->pattern_stmt_p.
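
[If the concern is just the spelling at use sites, a thin wrapper over the
new field would keep the old name.  This is a sketch of a possible
follow-up, not something the patch as posted does:

    /* Return true if STMT_INFO is a pattern statement, now a direct
       field read rather than a lookup via STMT_VINFO_RELATED_STMT.  */
    static inline bool
    is_pattern_stmt_p (stmt_vec_info stmt_info)
    {
      return stmt_info->pattern_stmt_p;
    }
]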

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
>         (is_pattern_stmt_p): Delete.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
>         on pattern statements.
>         (vect_split_statement, vect_mark_pattern_stmts): Use the new
>         pattern_stmt_p field instead of is_pattern_stmt_p.
>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
>         * tree-vect-loop.c (vectorizable_live_operation): Likewise.
>         * tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
>         (vect_find_last_scalar_stmt_in_slp, vect_remove_slp_scalar_calls)
>         (vect_schedule_slp): Likewise.
>         * tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
>         (vectorizable_call, vectorizable_simd_clone_call, vectorizable_shift)
>         (vectorizable_store, vect_remove_stores): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:23:56.440544995 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:02.364492386 +0100
> @@ -791,6 +791,12 @@ struct _stmt_vec_info {
>    /* Stmt is part of some pattern (computation idiom)  */
>    bool in_pattern_p;
>
> +  /* True if the statement was created during pattern recognition as
> +     part of the replacement for RELATED_STMT.  This implies that the
> +     statement isn't part of any basic block, although for convenience
> +     its gimple_bb is the same as for RELATED_STMT.  */
> +  bool pattern_stmt_p;
> +
>    /* Is this statement vectorizable or should it be skipped in (partial)
>       vectorization.  */
>    bool vectorizable;
> @@ -1151,16 +1157,6 @@ get_later_stmt (stmt_vec_info stmt1_info
>      return stmt2_info;
>  }
>
> -/* Return TRUE if a statement represented by STMT_INFO is a part of a
> -   pattern.  */
> -
> -static inline bool
> -is_pattern_stmt_p (stmt_vec_info stmt_info)
> -{
> -  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> -  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
> -}
> -
>  /* Return true if BB is a loop header.  */
>
>  static inline bool
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:59.408518638 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:24:02.360492422 +0100
> @@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
> +  pattern_stmt_info->pattern_stmt_p = true;
>    STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
>    STMT_VINFO_DEF_TYPE (pattern_stmt_info)
>      = STMT_VINFO_DEF_TYPE (orig_stmt_info);
> @@ -630,7 +631,7 @@ vect_recog_temp_ssa_var (tree type, gimp
>  vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
>                       gimple *stmt1, tree vectype)
>  {
> -  if (is_pattern_stmt_p (stmt2_info))
> +  if (stmt2_info->pattern_stmt_p)
>      {
>        /* STMT2_INFO is part of a pattern.  Get the statement to which
>          the pattern is attached.  */
> @@ -4726,7 +4727,7 @@ vect_mark_pattern_stmts (stmt_vec_info o
>    gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
>
>    gimple *orig_pattern_stmt = NULL;
> -  if (is_pattern_stmt_p (orig_stmt_info))
> +  if (orig_stmt_info->pattern_stmt_p)
>      {
>        /* We're replacing a statement in an existing pattern definition
>          sequence.  */
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:53.204573732 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:02.356492457 +0100
> @@ -212,9 +212,9 @@ vect_preserves_scalar_order_p (stmt_vec_
>       (but could happen later) while reads will happen no later than their
>       current position (but could happen earlier).  Reordering is therefore
>       only possible if the first access is a write.  */
> -  if (is_pattern_stmt_p (stmtinfo_a))
> +  if (stmtinfo_a->pattern_stmt_p)
>      stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
> -  if (is_pattern_stmt_p (stmtinfo_b))
> +  if (stmtinfo_b->pattern_stmt_p)
>      stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
>    stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
>    return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:23:56.436545030 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:02.360492422 +0100
> @@ -7907,7 +7907,7 @@ vectorizable_live_operation (stmt_vec_in
>      }
>
>    /* If stmt has a related stmt, then use that for getting the lhs.  */
> -  gimple *stmt = (is_pattern_stmt_p (stmt_info)
> +  gimple *stmt = (stmt_info->pattern_stmt_p
>                   ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
>                   : stmt_info->stmt);
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:23:53.204573732 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:02.360492422 +0100
> @@ -376,7 +376,7 @@ vect_get_and_check_slp_defs (vec_info *v
>        /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
>          the def stmt from the pattern.  Check that all the stmts of the
>          node are in the pattern.  */
> -      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
> +      if (def_stmt_info && def_stmt_info->pattern_stmt_p)
>          {
>            pattern = true;
>            if (!first && !oprnd_info->first_pattern
> @@ -1315,7 +1315,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>                  do extra work to cancel the pattern so the uses see the
>                  scalar version.  */
> -             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
> +             && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
>             {
>               slp_tree grandchild;
>
> @@ -1359,7 +1359,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>              do extra work to cancel the pattern so the uses see the
>              scalar version.  */
> -         && !is_pattern_stmt_p (stmt_info))
> +         && !stmt_info->pattern_stmt_p)
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "Building vector operands from scalars\n");
> @@ -1486,7 +1486,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>                      to do extra work to cancel the pattern so the uses see the
>                      scalar version.  */
> -                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
> +                 && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
>                 {
>                   unsigned int j;
>                   slp_tree grandchild;
> @@ -1848,7 +1848,7 @@ vect_find_last_scalar_stmt_in_slp (slp_t
>
>    for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>      {
> -      if (is_pattern_stmt_p (stmt_vinfo))
> +      if (stmt_vinfo->pattern_stmt_p)
>         stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
>        last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
>      }
> @@ -4044,8 +4044,7 @@ vect_remove_slp_scalar_calls (slp_tree n
>        gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
>        if (!stmt || gimple_bb (stmt) == NULL)
>         continue;
> -      if (is_pattern_stmt_p (stmt_info)
> -         || !PURE_SLP_STMT (stmt_info))
> +      if (stmt_info->pattern_stmt_p || !PURE_SLP_STMT (stmt_info))
>         continue;
>        lhs = gimple_call_lhs (stmt);
>        new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> @@ -4106,7 +4105,7 @@ vect_schedule_slp (vec_info *vinfo)
>           if (!STMT_VINFO_DATA_REF (store_info))
>             break;
>
> -         if (is_pattern_stmt_p (store_info))
> +         if (store_info->pattern_stmt_p)
>             store_info = STMT_VINFO_RELATED_STMT (store_info);
>           /* Free the attached stmt_vec_info and remove the stmt.  */
>           gsi = gsi_for_stmt (store_info);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:56.440544995 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:02.364492386 +0100
> @@ -731,7 +731,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
>              break;
>          }
>
> -      if (is_pattern_stmt_p (stmt_vinfo))
> +      if (stmt_vinfo->pattern_stmt_p)
>          {
>            /* Pattern statements are not inserted into the code, so
>               FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
> @@ -3623,7 +3623,7 @@ vectorizable_call (stmt_vec_info stmt_in
>    if (slp_node)
>      return true;
>
> -  if (is_pattern_stmt_p (stmt_info))
> +  if (stmt_info->pattern_stmt_p)
>      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>    lhs = gimple_get_lhs (stmt_info->stmt);
>
> @@ -4362,7 +4362,7 @@ vectorizable_simd_clone_call (stmt_vec_i
>    if (scalar_dest)
>      {
>        type = TREE_TYPE (scalar_dest);
> -      if (is_pattern_stmt_p (stmt_info))
> +      if (stmt_info->pattern_stmt_p)
>         lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
>        else
>         lhs = gimple_call_lhs (stmt);
> @@ -5552,7 +5552,7 @@ vectorizable_shift (stmt_vec_info stmt_i
>        /* If the shift amount is computed by a pattern stmt we cannot
>           use the scalar amount directly thus give up and use a vector
>          shift.  */
> -      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
> +      if (op1_def_stmt_info && op1_def_stmt_info->pattern_stmt_p)
>         scalar_shift_arg = false;
>      }
>    else
> @@ -6286,7 +6286,7 @@ vectorizable_store (stmt_vec_info stmt_i
>      {
>        tree scalar_dest = gimple_assign_lhs (assign);
>        if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
> -         && is_pattern_stmt_p (stmt_info))
> +         && stmt_info->pattern_stmt_p)
>         scalar_dest = TREE_OPERAND (scalar_dest, 0);
>        if (TREE_CODE (scalar_dest) != ARRAY_REF
>           && TREE_CODE (scalar_dest) != BIT_FIELD_REF
> @@ -9839,7 +9839,7 @@ vect_remove_stores (stmt_vec_info first_
>    while (next_stmt_info)
>      {
>        stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
> -      if (is_pattern_stmt_p (next_stmt_info))
> +      if (next_stmt_info->pattern_stmt_p)
>         next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
>        /* Free the attached stmt_vec_info and remove the stmt.  */
>        next_si = gsi_for_stmt (next_stmt_info->stmt);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [37/46] Associate alignment information with stmt_vec_infos
  2018-07-24 10:07 ` [37/46] Associate alignment information with stmt_vec_infos Richard Sandiford
@ 2018-07-25 10:18   ` Richard Biener
  2018-07-26 10:55     ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:18 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Alignment information is really a property of a stmt_vec_info
> (and the way we want to vectorise it) rather than the original scalar dr.
> I think that was true even before the recent dr sharing.

But that is only so as long as we handle only stmts with a single DR.
In reality alignment info _is_ a property of the DR and not of the stmt.

So you're doing a shortcut here, shouldn't we rename
dr_misalignment to stmt_dr_misalignment then?

Otherwise I don't see how this makes sense semantically.
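
As a sketch of the data-structure question (simplified stand-ins, not
the real GCC types), the patch moves the alignment fields from being
reached via the data_reference to living directly in the statement:

  struct dr_aux_sketch
  {
    int misalignment;               /* bytes, or -1 for unknown */
    unsigned int target_alignment;  /* alignment the vector code targets */
  };

  struct stmt_info_sketch
  {
    void *data_ref;        /* the statement's single data reference */
    dr_aux_sketch dr_aux;  /* alignment info now keyed by the statement */
  };

  /* After the patch, the query takes the statement...  */
  inline int
  dr_misalignment (stmt_info_sketch *stmt_info)
  {
    return stmt_info->dr_aux.misalignment;
  }

...which is only unambiguous for as long as each vectorized statement
has at most one data reference.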

> This patch therefore makes the alignment-related interfaces take
> stmt_vec_infos rather than data_references.
>
>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (STMT_VINFO_TARGET_ALIGNMENT): New macro.
>         (DR_VECT_AUX, DR_MISALIGNMENT, SET_DR_MISALIGNMENT)
>         (DR_TARGET_ALIGNMENT): Delete.
>         (set_dr_misalignment, dr_misalignment, aligned_access_p)
>         (known_alignment_for_access_p, vect_known_alignment_in_bytes)
>         (vect_dr_behavior): Take a stmt_vec_info rather than a data_reference.
>         * tree-vect-data-refs.c (vect_calculate_target_alignment)
>         (vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
>         (vector_alignment_reachable_p, vect_get_peeling_costs_all_drs)
>         (vect_peeling_supportable, vect_enhance_data_refs_alignment)
>         (vect_duplicate_ssa_name_ptr_info): Update after above changes.
>         (vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
>         (vect_setup_realignment, vect_supportable_dr_alignment): Likewise.
>         * tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
>         (vect_gen_prolog_loop_niters): Likewise.
>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>         (compare_step_with_zero, get_group_load_store_type): Likewise.
>         (vect_get_data_ptr_increment, ensure_base_align, vectorizable_store)
>         (vectorizable_load): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:02.364492386 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:05.744462369 +0100
> @@ -1031,6 +1031,9 @@ #define STMT_VINFO_NUM_SLP_USES(S)        (S)->
>  #define STMT_VINFO_REDUC_TYPE(S)       (S)->reduc_type
>  #define STMT_VINFO_REDUC_DEF(S)                (S)->reduc_def
>
> +/* Only defined once dr_misalignment is defined.  */
> +#define STMT_VINFO_TARGET_ALIGNMENT(S) (S)->dr_aux.target_alignment
> +
>  #define DR_GROUP_FIRST_ELEMENT(S)  (gcc_checking_assert ((S)->data_ref_info), (S)->first_element)
>  #define DR_GROUP_NEXT_ELEMENT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->next_element)
>  #define DR_GROUP_SIZE(S)           (gcc_checking_assert ((S)->data_ref_info), (S)->size)
> @@ -1048,8 +1051,6 @@ #define HYBRID_SLP_STMT(S)
>  #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
>  #define STMT_SLP_TYPE(S)                   (S)->slp_type
>
> -#define DR_VECT_AUX(dr) (&vinfo_for_stmt (DR_STMT (dr))->dr_aux)
> -
>  #define VECT_MAX_COST 1000
>
>  /* The maximum number of intermediate steps required in multi-step type
> @@ -1256,73 +1257,72 @@ add_stmt_costs (void *data, stmt_vector_
>  #define DR_MISALIGNMENT_UNKNOWN (-1)
>  #define DR_MISALIGNMENT_UNINITIALIZED (-2)
>
> +/* Record that the vectorized form of the data access in STMT_INFO
> +   will be misaligned by VAL bytes wrt its target alignment.
> +   Negative values have the meanings above.  */
> +
>  inline void
> -set_dr_misalignment (struct data_reference *dr, int val)
> +set_dr_misalignment (stmt_vec_info stmt_info, int val)
>  {
> -  dataref_aux *data_aux = DR_VECT_AUX (dr);
> -  data_aux->misalignment = val;
> +  stmt_info->dr_aux.misalignment = val;
>  }
>
> +/* Return the misalignment in bytes of the vectorized form of the data
> +   access in STMT_INFO, relative to its target alignment.  Negative
> +   values have the meanings above.  */
> +
>  inline int
> -dr_misalignment (struct data_reference *dr)
> +dr_misalignment (stmt_vec_info stmt_info)
>  {
> -  int misalign = DR_VECT_AUX (dr)->misalignment;
> +  int misalign = stmt_info->dr_aux.misalignment;
>    gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
>    return misalign;
>  }
>
> -/* Reflects actual alignment of first access in the vectorized loop,
> -   taking into account peeling/versioning if applied.  */
> -#define DR_MISALIGNMENT(DR) dr_misalignment (DR)
> -#define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
> -
> -/* Only defined once DR_MISALIGNMENT is defined.  */
> -#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
> -
> -/* Return true if data access DR is aligned to its target alignment
> -   (which may be less than a full vector).  */
> +/* Return true if the vectorized form of the data access in STMT_INFO is
> +   aligned to its target alignment (which may be less than a full vector).  */
>
>  static inline bool
> -aligned_access_p (struct data_reference *data_ref_info)
> +aligned_access_p (stmt_vec_info stmt_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) == 0);
> +  return (dr_misalignment (stmt_info) == 0);
>  }
>
> -/* Return TRUE if the alignment of the data access is known, and FALSE
> -   otherwise.  */
> +/* Return true if the alignment of the vectorized form of the data
> +   access in STMT_INFO is known at compile time.  */
>
>  static inline bool
> -known_alignment_for_access_p (struct data_reference *data_ref_info)
> +known_alignment_for_access_p (stmt_vec_info stmt_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
> +  return (dr_misalignment (stmt_info) != DR_MISALIGNMENT_UNKNOWN);
>  }
>
>  /* Return the minimum alignment in bytes that the vectorized version
> -   of DR is guaranteed to have.  */
> +   of the data reference in STMT_INFO is guaranteed to have.  */
>
>  static inline unsigned int
> -vect_known_alignment_in_bytes (struct data_reference *dr)
> +vect_known_alignment_in_bytes (stmt_vec_info stmt_info)
>  {
> -  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> +  int misalignment = dr_misalignment (stmt_info);
> +  if (misalignment == DR_MISALIGNMENT_UNKNOWN)
>      return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
> -  if (DR_MISALIGNMENT (dr) == 0)
> -    return DR_TARGET_ALIGNMENT (dr);
> -  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
> +  if (misalignment == 0)
> +    return STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
> +  return misalignment & -misalignment;
>  }
>
> -/* Return the behavior of DR with respect to the vectorization context
> -   (which for outer loop vectorization might not be the behavior recorded
> -   in DR itself).  */
> +/* Return the data reference behavior of STMT_INFO with respect to the
> +   vectorization context (which for outer loop vectorization might not
> +   be the behavior recorded in STMT_VINFO_DATA_REF).  */
>
>  static inline innermost_loop_behavior *
> -vect_dr_behavior (data_reference *dr)
> +vect_dr_behavior (stmt_vec_info stmt_info)
>  {
> -  gimple *stmt = DR_STMT (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    if (loop_vinfo == NULL
>        || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
> -    return &DR_INNERMOST (dr);
> +    return &DR_INNERMOST (STMT_VINFO_DATA_REF (stmt_info));
>    else
>      return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
>  }
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:24:02.356492457 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:05.740462405 +0100
> @@ -873,7 +873,7 @@ vect_calculate_target_alignment (struct
>     Compute the misalignment of the data reference DR.
>
>     Output:
> -   1. DR_MISALIGNMENT (DR) is defined.
> +   1. dr_misalignment (STMT_INFO) is defined.
>
>     FOR NOW: No analysis is actually performed. Misalignment is calculated
>     only for trivial cases. TODO.  */
> @@ -896,17 +896,17 @@ vect_compute_data_ref_alignment (struct
>      loop = LOOP_VINFO_LOOP (loop_vinfo);
>
>    /* Initialize misalignment to unknown.  */
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
>
>    if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>      return;
>
> -  innermost_loop_behavior *drb = vect_dr_behavior (dr);
> +  innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
>    bool step_preserves_misalignment_p;
>
>    unsigned HOST_WIDE_INT vector_alignment
>      = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
> -  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
> +  STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
>
>    /* No step for BB vectorization.  */
>    if (!loop)
> @@ -1009,8 +1009,8 @@ vect_compute_data_ref_alignment (struct
>            dump_printf (MSG_NOTE, "\n");
>          }
>
> -      DR_VECT_AUX (dr)->base_decl = base;
> -      DR_VECT_AUX (dr)->base_misaligned = true;
> +      stmt_info->dr_aux.base_decl = base;
> +      stmt_info->dr_aux.base_misaligned = true;
>        base_misalignment = 0;
>      }
>    poly_int64 misalignment
> @@ -1038,12 +1038,13 @@ vect_compute_data_ref_alignment (struct
>        return;
>      }
>
> -  SET_DR_MISALIGNMENT (dr, const_misalignment);
> +  set_dr_misalignment (stmt_info, const_misalignment);
>
>    if (dump_enabled_p ())
>      {
>        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
> +                      "misalign = %d bytes of ref ",
> +                      dr_misalignment (stmt_info));
>        dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
>        dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>      }
> @@ -1089,29 +1090,29 @@ vect_update_misalignment_for_peel (struc
>      {
>        if (current_dr != dr)
>          continue;
> -      gcc_assert (!known_alignment_for_access_p (dr)
> -                 || !known_alignment_for_access_p (dr_peel)
> -                 || (DR_MISALIGNMENT (dr) / dr_size
> -                     == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
> -      SET_DR_MISALIGNMENT (dr, 0);
> +      gcc_assert (!known_alignment_for_access_p (stmt_info)
> +                 || !known_alignment_for_access_p (peel_stmt_info)
> +                 || (dr_misalignment (stmt_info) / dr_size
> +                     == dr_misalignment (peel_stmt_info) / dr_peel_size));
> +      set_dr_misalignment (stmt_info, 0);
>        return;
>      }
>
> -  if (known_alignment_for_access_p (dr)
> -      && known_alignment_for_access_p (dr_peel))
> +  if (known_alignment_for_access_p (stmt_info)
> +      && known_alignment_for_access_p (peel_stmt_info))
>      {
>        bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
> -      int misal = DR_MISALIGNMENT (dr);
> +      int misal = dr_misalignment (stmt_info);
>        misal += negative ? -npeel * dr_size : npeel * dr_size;
> -      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
> -      SET_DR_MISALIGNMENT (dr, misal);
> +      misal &= STMT_VINFO_TARGET_ALIGNMENT (stmt_info) - 1;
> +      set_dr_misalignment (stmt_info, misal);
>        return;
>      }
>
>    if (dump_enabled_p ())
>      dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
>                      "to unknown (-1).\n");
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
>  }
>
>
> @@ -1219,13 +1220,13 @@ vector_alignment_reachable_p (struct dat
>        int elem_size, mis_in_elements;
>
>        /* FORNOW: handle only known alignment.  */
> -      if (!known_alignment_for_access_p (dr))
> +      if (!known_alignment_for_access_p (stmt_info))
>         return false;
>
>        poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
>        poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
>        elem_size = vector_element_size (vector_size, nelements);
> -      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
> +      mis_in_elements = dr_misalignment (stmt_info) / elem_size;
>
>        if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
>         return false;
> @@ -1233,7 +1234,8 @@ vector_alignment_reachable_p (struct dat
>
>    /* If misalignment is known at the compile time then allow peeling
>       only if natural alignment is reachable through peeling.  */
> -  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
> +  if (known_alignment_for_access_p (stmt_info)
> +      && !aligned_access_p (stmt_info))
>      {
>        HOST_WIDE_INT elmsize =
>                 int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
> @@ -1241,10 +1243,10 @@ vector_alignment_reachable_p (struct dat
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
> -         dump_printf (MSG_NOTE,
> -                      ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
> +         dump_printf (MSG_NOTE, ". misalignment = %d.\n",
> +                      dr_misalignment (stmt_info));
>         }
> -      if (DR_MISALIGNMENT (dr) % elmsize)
> +      if (dr_misalignment (stmt_info) % elmsize)
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1253,7 +1255,7 @@ vector_alignment_reachable_p (struct dat
>         }
>      }
>
> -  if (!known_alignment_for_access_p (dr))
> +  if (!known_alignment_for_access_p (stmt_info))
>      {
>        tree type = TREE_TYPE (DR_REF (dr));
>        bool is_packed = not_size_aligned (DR_REF (dr));
> @@ -1401,6 +1403,8 @@ vect_get_peeling_costs_all_drs (vec<data
>                                 unsigned int npeel,
>                                 bool unknown_misalignment)
>  {
> +  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
> +                                 : NULL_STMT_VEC_INFO);
>    unsigned i;
>    data_reference *dr;
>
> @@ -1423,16 +1427,16 @@ vect_get_peeling_costs_all_drs (vec<data
>         continue;
>
>        int save_misalignment;
> -      save_misalignment = DR_MISALIGNMENT (dr);
> +      save_misalignment = dr_misalignment (stmt_info);
>        if (npeel == 0)
>         ;
> -      else if (unknown_misalignment && dr == dr0)
> -       SET_DR_MISALIGNMENT (dr, 0);
> +      else if (unknown_misalignment && stmt_info == peel_stmt_info)
> +       set_dr_misalignment (stmt_info, 0);
>        else
>         vect_update_misalignment_for_peel (dr, dr0, npeel);
>        vect_get_data_access_cost (dr, inside_cost, outside_cost,
>                                  body_cost_vec, prologue_cost_vec);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      set_dr_misalignment (stmt_info, save_misalignment);
>      }
>  }
>
> @@ -1552,10 +1556,10 @@ vect_peeling_supportable (loop_vec_info
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      save_misalignment = DR_MISALIGNMENT (dr);
> +      save_misalignment = dr_misalignment (stmt_info);
>        vect_update_misalignment_for_peel (dr, dr0, npeel);
>        supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      set_dr_misalignment (stmt_info, save_misalignment);
>
>        if (!supportable_dr_alignment)
>         return false;
> @@ -1598,27 +1602,27 @@ vect_peeling_supportable (loop_vec_info
>
>       -- original loop, before alignment analysis:
>         for (i=0; i<N; i++){
> -         x = q[i];                     # DR_MISALIGNMENT(q) = unknown
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = unknown
> +         x = q[i];                     # dr_misalignment(q) = unknown
> +         p[i] = y;                     # dr_misalignment(p) = unknown
>         }
>
>       -- After vect_compute_data_refs_alignment:
>         for (i=0; i<N; i++){
> -         x = q[i];                     # DR_MISALIGNMENT(q) = 3
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = unknown
> +         x = q[i];                     # dr_misalignment(q) = 3
> +         p[i] = y;                     # dr_misalignment(p) = unknown
>         }
>
>       -- Possibility 1: we do loop versioning:
>       if (p is aligned) {
>         for (i=0; i<N; i++){    # loop 1A
> -         x = q[i];                     # DR_MISALIGNMENT(q) = 3
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = 0
> +         x = q[i];                     # dr_misalignment(q) = 3
> +         p[i] = y;                     # dr_misalignment(p) = 0
>         }
>       }
>       else {
>         for (i=0; i<N; i++){    # loop 1B
> -         x = q[i];                     # DR_MISALIGNMENT(q) = 3
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = unaligned
> +         x = q[i];                     # dr_misalignment(q) = 3
> +         p[i] = y;                     # dr_misalignment(p) = unaligned
>         }
>       }
>
> @@ -1628,8 +1632,8 @@ vect_peeling_supportable (loop_vec_info
>         p[i] = y;
>       }
>       for (i = 3; i < N; i++){  # loop 2A
> -       x = q[i];                       # DR_MISALIGNMENT(q) = 0
> -       p[i] = y;                       # DR_MISALIGNMENT(p) = unknown
> +       x = q[i];                       # dr_misalignment(q) = 0
> +       p[i] = y;                       # dr_misalignment(p) = unknown
>       }
>
>       -- Possibility 3: combination of loop peeling and versioning:
> @@ -1639,14 +1643,14 @@ vect_peeling_supportable (loop_vec_info
>       }
>       if (p is aligned) {
>         for (i = 3; i<N; i++){  # loop 3A
> -         x = q[i];                     # DR_MISALIGNMENT(q) = 0
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = 0
> +         x = q[i];                     # dr_misalignment(q) = 0
> +         p[i] = y;                     # dr_misalignment(p) = 0
>         }
>       }
>       else {
>         for (i = 3; i<N; i++){  # loop 3B
> -         x = q[i];                     # DR_MISALIGNMENT(q) = 0
> -         p[i] = y;                     # DR_MISALIGNMENT(p) = unaligned
> +         x = q[i];                     # dr_misalignment(q) = 0
> +         p[i] = y;                     # dr_misalignment(p) = unaligned
>         }
>       }
>
> @@ -1745,17 +1749,20 @@ vect_enhance_data_refs_alignment (loop_v
>        do_peeling = vector_alignment_reachable_p (dr);
>        if (do_peeling)
>          {
> -          if (known_alignment_for_access_p (dr))
> +         if (known_alignment_for_access_p (stmt_info))
>              {
>               unsigned int npeel_tmp = 0;
>               bool negative = tree_int_cst_compare (DR_STEP (dr),
>                                                     size_zero_node) < 0;
>
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> +             unsigned int target_align
> +               = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
>               unsigned int dr_size = vect_get_scalar_dr_size (dr);
> -             mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
> -             if (DR_MISALIGNMENT (dr) != 0)
> +             mis = (negative
> +                    ? dr_misalignment (stmt_info)
> +                    : -dr_misalignment (stmt_info));
> +             if (mis != 0)
>                 npeel_tmp = (mis & (target_align - 1)) / dr_size;
>
>                /* For multiple types, it is possible that the bigger type access
> @@ -1780,7 +1787,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>                   /* NPEEL_TMP is 0 when there is no misalignment, but also
>                      allow peeling NELEMENTS.  */
> -                 if (DR_MISALIGNMENT (dr) == 0)
> +                 if (dr_misalignment (stmt_info) == 0)
>                     possible_npeel_number++;
>                 }
>
> @@ -1841,7 +1848,7 @@ vect_enhance_data_refs_alignment (loop_v
>          }
>        else
>          {
> -          if (!aligned_access_p (dr))
> +         if (!aligned_access_p (stmt_info))
>              {
>                if (dump_enabled_p ())
>                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -2010,10 +2017,10 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
> -      vectype = STMT_VINFO_VECTYPE (stmt_info);
> +      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
> +      vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
>
> -      if (known_alignment_for_access_p (dr0))
> +      if (known_alignment_for_access_p (peel_stmt_info))
>          {
>           bool negative = tree_int_cst_compare (DR_STEP (dr0),
>                                                 size_zero_node) < 0;
> @@ -2021,11 +2028,14 @@ vect_enhance_data_refs_alignment (loop_v
>              {
>                /* Since it's known at compile time, compute the number of
>                   iterations in the peeled loop (the peeling factor) for use in
> -                 updating DR_MISALIGNMENT values.  The peeling factor is the
> +                 updating dr_misalignment values.  The peeling factor is the
>                   vectorization factor minus the misalignment as an element
>                   count.  */
> -             mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> +             mis = (negative
> +                    ? dr_misalignment (peel_stmt_info)
> +                    : -dr_misalignment (peel_stmt_info));
> +             unsigned int target_align
> +               = STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
>               npeel = ((mis & (target_align - 1))
>                        / vect_get_scalar_dr_size (dr0));
>              }
> @@ -2033,9 +2043,8 @@ vect_enhance_data_refs_alignment (loop_v
>           /* For interleaved data access every iteration accesses all the
>              members of the group, therefore we divide the number of iterations
>              by the group size.  */
> -         stmt_info = vect_dr_stmt (dr0);
> -         if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> -           npeel /= DR_GROUP_SIZE (stmt_info);
> +         if (STMT_VINFO_GROUPED_ACCESS (peel_stmt_info))
> +           npeel /= DR_GROUP_SIZE (peel_stmt_info);
>
>            if (dump_enabled_p ())
>              dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2047,7 +2056,9 @@ vect_enhance_data_refs_alignment (loop_v
>         do_peeling = false;
>
>        /* Check if all datarefs are supportable and log.  */
> -      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
> +      if (do_peeling
> +         && known_alignment_for_access_p (peel_stmt_info)
> +         && npeel == 0)
>          {
>            stat = vect_verify_datarefs_alignment (loop_vinfo);
>            if (!stat)
> @@ -2066,7 +2077,8 @@ vect_enhance_data_refs_alignment (loop_v
>                unsigned max_peel = npeel;
>                if (max_peel == 0)
>                  {
> -                 unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> +                 unsigned int target_align
> +                   = STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
>                   max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
>                  }
>                if (max_peel > max_allowed_peel)
> @@ -2095,19 +2107,20 @@ vect_enhance_data_refs_alignment (loop_v
>
>        if (do_peeling)
>          {
> -          /* (1.2) Update the DR_MISALIGNMENT of each data reference DR_i.
> -             If the misalignment of DR_i is identical to that of dr0 then set
> -             DR_MISALIGNMENT (DR_i) to zero.  If the misalignment of DR_i and
> -             dr0 are known at compile time then increment DR_MISALIGNMENT (DR_i)
> -             by the peeling factor times the element size of DR_i (MOD the
> -             vectorization factor times the size).  Otherwise, the
> -             misalignment of DR_i must be set to unknown.  */
> +         /* (1.2) Update the dr_misalignment of each data reference
> +            statement STMT_i.  If the misalignment of STMT_i is identical
> +            to that of PEEL_STMT_INFO then set dr_misalignment (STMT_i)
> +            to zero.  If the misalignment of STMT_i and PEEL_STMT_INFO are
> +            known at compile time then increment dr_misalignment (STMT_i)
> +            by the peeling factor times the element size of STMT_i (MOD
> +            the vectorization factor times the size).  Otherwise, the
> +            misalignment of STMT_i must be set to unknown.  */
>           FOR_EACH_VEC_ELT (datarefs, i, dr)
>             if (dr != dr0)
>               {
>                 /* Strided accesses perform only component accesses, alignment
>                    is irrelevant for them.  */
> -               stmt_info = vect_dr_stmt (dr);
> +               stmt_vec_info stmt_info = vect_dr_stmt (dr);
>                 if (STMT_VINFO_STRIDED_P (stmt_info)
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
> @@ -2120,8 +2133,8 @@ vect_enhance_data_refs_alignment (loop_v
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
>            else
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
> -             = DR_MISALIGNMENT (dr0);
> -         SET_DR_MISALIGNMENT (dr0, 0);
> +             = dr_misalignment (peel_stmt_info);
> +         set_dr_misalignment (peel_stmt_info, 0);
>           if (dump_enabled_p ())
>              {
>                dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2160,7 +2173,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>           /* For interleaving, only the alignment of the first access
>              matters.  */
> -         if (aligned_access_p (dr)
> +         if (aligned_access_p (stmt_info)
>               || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>                   && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
>             continue;
> @@ -2182,7 +2195,7 @@ vect_enhance_data_refs_alignment (loop_v
>                int mask;
>                tree vectype;
>
> -              if (known_alignment_for_access_p (dr)
> +              if (known_alignment_for_access_p (stmt_info)
>                    || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
>                       >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
>                  {
> @@ -2241,8 +2254,7 @@ vect_enhance_data_refs_alignment (loop_v
>           of the loop being vectorized.  */
>        FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
>          {
> -          dr = STMT_VINFO_DATA_REF (stmt_info);
> -         SET_DR_MISALIGNMENT (dr, 0);
> +         set_dr_misalignment (stmt_info, 0);
>           if (dump_enabled_p ())
>              dump_printf_loc (MSG_NOTE, vect_location,
>                               "Alignment of access forced using versioning.\n");
> @@ -4456,13 +4468,14 @@ vect_get_new_ssa_name (tree type, enum v
>  static void
>  vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
>  {
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
> -  int misalign = DR_MISALIGNMENT (dr);
> +  int misalign = dr_misalignment (stmt_info);
>    if (misalign == DR_MISALIGNMENT_UNKNOWN)
>      mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (name));
>    else
>      set_ptr_info_alignment (SSA_NAME_PTR_INFO (name),
> -                           DR_TARGET_ALIGNMENT (dr), misalign);
> +                           STMT_VINFO_TARGET_ALIGNMENT (stmt_info), misalign);
>  }
>
>  /* Function vect_create_addr_base_for_vector_ref.
> @@ -4513,7 +4526,7 @@ vect_create_addr_base_for_vector_ref (st
>    tree vect_ptr_type;
>    tree step = TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr)));
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> -  innermost_loop_behavior *drb = vect_dr_behavior (dr);
> +  innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
>
>    tree data_ref_base = unshare_expr (drb->base_address);
>    tree base_offset = unshare_expr (drb->offset);
> @@ -4687,7 +4700,7 @@ vect_create_data_ref_ptr (stmt_vec_info
>
>    /* Check the step (evolution) of the load in LOOP, and record
>       whether it's invariant.  */
> -  step = vect_dr_behavior (dr)->step;
> +  step = vect_dr_behavior (stmt_info)->step;
>    if (integer_zerop (step))
>      *inv_p = true;
>    else
> @@ -5519,7 +5532,7 @@ vect_setup_realignment (stmt_vec_info st
>         new_temp = copy_ssa_name (ptr);
>        else
>         new_temp = make_ssa_name (TREE_TYPE (ptr));
> -      unsigned int align = DR_TARGET_ALIGNMENT (dr);
> +      unsigned int align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
>        new_stmt = gimple_build_assign
>                    (new_temp, BIT_AND_EXPR, ptr,
>                     build_int_cst (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
> @@ -6438,7 +6451,7 @@ vect_supportable_dr_alignment (struct da
>    struct loop *vect_loop = NULL;
>    bool nested_in_vect_loop = false;
>
> -  if (aligned_access_p (dr) && !check_aligned_accesses)
> +  if (aligned_access_p (stmt_info) && !check_aligned_accesses)
>      return dr_aligned;
>
>    /* For now assume all conditional loads/stores support unaligned
> @@ -6546,11 +6559,11 @@ vect_supportable_dr_alignment (struct da
>           else
>             return dr_explicit_realign_optimized;
>         }
> -      if (!known_alignment_for_access_p (dr))
> +      if (!known_alignment_for_access_p (stmt_info))
>         is_packed = not_size_aligned (DR_REF (dr));
>
>        if (targetm.vectorize.support_vector_misalignment
> -           (mode, type, DR_MISALIGNMENT (dr), is_packed))
> +           (mode, type, dr_misalignment (stmt_info), is_packed))
>         /* Can't software pipeline the loads, but can at least do them.  */
>         return dr_unaligned_supported;
>      }
> @@ -6559,11 +6572,11 @@ vect_supportable_dr_alignment (struct da
>        bool is_packed = false;
>        tree type = (TREE_TYPE (DR_REF (dr)));
>
> -      if (!known_alignment_for_access_p (dr))
> +      if (!known_alignment_for_access_p (stmt_info))
>         is_packed = not_size_aligned (DR_REF (dr));
>
>       if (targetm.vectorize.support_vector_misalignment
> -          (mode, type, DR_MISALIGNMENT (dr), is_packed))
> +          (mode, type, dr_misalignment (stmt_info), is_packed))
>         return dr_unaligned_supported;
>      }
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:23:46.112636713 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:24:05.740462405 +0100
> @@ -1564,7 +1564,7 @@ get_misalign_in_elems (gimple **seq, loo
>    stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
> -  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> +  unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
>    gcc_assert (target_align != 0);
>
>    bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
> @@ -1600,7 +1600,7 @@ get_misalign_in_elems (gimple **seq, loo
>     refer to an aligned location.  The following computation is generated:
>
>     If the misalignment of DR is known at compile time:
> -     addr_mis = int mis = DR_MISALIGNMENT (dr);
> +     addr_mis = int mis = dr_misalignment (stmt-containing-DR);
>     Else, compute address misalignment in bytes:
>       addr_mis = addr & (target_align - 1)
>
> @@ -1633,7 +1633,7 @@ vect_gen_prolog_loop_niters (loop_vec_in
>    tree iters, iters_name;
>    stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> -  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> +  unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
>
>    if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) > 0)
>      {
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:02.364492386 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:05.744462369 +0100
> @@ -1079,7 +1079,8 @@ vect_get_store_cost (stmt_vec_info stmt_
>          /* Here, we assign an additional cost for the unaligned store.  */
>         *inside_cost += record_stmt_cost (body_cost_vec, ncopies,
>                                           unaligned_store, stmt_info,
> -                                         DR_MISALIGNMENT (dr), vect_body);
> +                                         dr_misalignment (stmt_info),
> +                                         vect_body);
>          if (dump_enabled_p ())
>            dump_printf_loc (MSG_NOTE, vect_location,
>                             "vect_model_store_cost: unaligned supported by "
> @@ -1257,7 +1258,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
>          /* Here, we assign an additional cost for the unaligned load.  */
>         *inside_cost += record_stmt_cost (body_cost_vec, ncopies,
>                                           unaligned_load, stmt_info,
> -                                         DR_MISALIGNMENT (dr), vect_body);
> +                                         dr_misalignment (stmt_info),
> +                                         vect_body);
>
>          if (dump_enabled_p ())
>            dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2102,8 +2104,7 @@ vect_use_strided_gather_scatters_p (stmt
>  static int
>  compare_step_with_zero (stmt_vec_info stmt_info)
>  {
> -  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> -  return tree_int_cst_compare (vect_dr_behavior (dr)->step,
> +  return tree_int_cst_compare (vect_dr_behavior (stmt_info)->step,
>                                size_zero_node);
>  }
>
> @@ -2218,7 +2219,7 @@ get_group_load_store_type (stmt_vec_info
>              be a multiple of B and so we are guaranteed to access a
>              non-gap element in the same B-sized block.  */
>           if (overrun_p
> -             && gap < (vect_known_alignment_in_bytes (first_dr)
> +             && gap < (vect_known_alignment_in_bytes (first_stmt_info)
>                         / vect_get_scalar_dr_size (first_dr)))
>             overrun_p = false;
>           if (overrun_p && !can_overrun_p)
> @@ -2246,7 +2247,7 @@ get_group_load_store_type (stmt_vec_info
>          same B-sized block.  */
>        if (would_overrun_p
>           && !masked_p
> -         && gap < (vect_known_alignment_in_bytes (first_dr)
> +         && gap < (vect_known_alignment_in_bytes (first_stmt_info)
>                     / vect_get_scalar_dr_size (first_dr)))
>         would_overrun_p = false;
>
> @@ -2931,11 +2932,12 @@ vect_get_strided_load_store_ops (stmt_ve
>  vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
>                              vect_memory_access_type memory_access_type)
>  {
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    if (memory_access_type == VMAT_INVARIANT)
>      return size_zero_node;
>
>    tree iv_step = TYPE_SIZE_UNIT (aggr_type);
> -  tree step = vect_dr_behavior (dr)->step;
> +  tree step = vect_dr_behavior (stmt_info)->step;
>    if (tree_int_cst_sgn (step) == -1)
>      iv_step = fold_build1 (NEGATE_EXPR, TREE_TYPE (iv_step), iv_step);
>    return iv_step;
> @@ -6174,14 +6176,16 @@ vectorizable_operation (stmt_vec_info st
>  static void
>  ensure_base_align (struct data_reference *dr)
>  {
> -  if (DR_VECT_AUX (dr)->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
> +  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
>      return;
>
> -  if (DR_VECT_AUX (dr)->base_misaligned)
> +  if (stmt_info->dr_aux.base_misaligned)
>      {
> -      tree base_decl = DR_VECT_AUX (dr)->base_decl;
> +      tree base_decl = stmt_info->dr_aux.base_decl;
>
> -      unsigned int align_base_to = DR_TARGET_ALIGNMENT (dr) * BITS_PER_UNIT;
> +      unsigned int align_base_to = (stmt_info->dr_aux.target_alignment
> +                                   * BITS_PER_UNIT);
>
>        if (decl_in_symtab_p (base_decl))
>         symtab_node::get (base_decl)->increase_alignment (align_base_to);
> @@ -6190,7 +6194,7 @@ ensure_base_align (struct data_reference
>           SET_DECL_ALIGN (base_decl, align_base_to);
>            DECL_USER_ALIGN (base_decl) = 1;
>         }
> -      DR_VECT_AUX (dr)->base_misaligned = false;
> +      stmt_info->dr_aux.base_misaligned = false;
>      }
>  }
>
> @@ -7175,16 +7179,16 @@ vectorizable_store (stmt_vec_info stmt_i
>                    vect_permute_store_chain().  */
>                 vec_oprnd = result_chain[i];
>
> -             align = DR_TARGET_ALIGNMENT (first_dr);
> -             if (aligned_access_p (first_dr))
> +             align = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
> +             if (aligned_access_p (first_stmt_info))
>                 misalign = 0;
> -             else if (DR_MISALIGNMENT (first_dr) == -1)
> +             else if (dr_misalignment (first_stmt_info) == -1)
>                 {
> -                 align = dr_alignment (vect_dr_behavior (first_dr));
> +                 align = dr_alignment (vect_dr_behavior (first_stmt_info));
>                   misalign = 0;
>                 }
>               else
> -               misalign = DR_MISALIGNMENT (first_dr);
> +               misalign = dr_misalignment (first_stmt_info);
>               if (dataref_offset == NULL_TREE
>                   && TREE_CODE (dataref_ptr) == SSA_NAME)
>                 set_ptr_info_alignment (get_ptr_info (dataref_ptr), align,
> @@ -7227,9 +7231,9 @@ vectorizable_store (stmt_vec_info stmt_i
>                                           dataref_offset
>                                           ? dataref_offset
>                                           : build_int_cst (ref_type, 0));
> -                 if (aligned_access_p (first_dr))
> +                 if (aligned_access_p (first_stmt_info))
>                     ;
> -                 else if (DR_MISALIGNMENT (first_dr) == -1)
> +                 else if (dr_misalignment (first_stmt_info) == -1)
>                     TREE_TYPE (data_ref)
>                       = build_aligned_type (TREE_TYPE (data_ref),
>                                             align * BITS_PER_UNIT);
> @@ -8326,19 +8330,20 @@ vectorizable_load (stmt_vec_info stmt_in
>                         break;
>                       }
>
> -                   align = DR_TARGET_ALIGNMENT (dr);
> +                   align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
>                     if (alignment_support_scheme == dr_aligned)
>                       {
> -                       gcc_assert (aligned_access_p (first_dr));
> +                       gcc_assert (aligned_access_p (first_stmt_info));
>                         misalign = 0;
>                       }
> -                   else if (DR_MISALIGNMENT (first_dr) == -1)
> +                   else if (dr_misalignment (first_stmt_info) == -1)
>                       {
> -                       align = dr_alignment (vect_dr_behavior (first_dr));
> +                       align = dr_alignment
> +                         (vect_dr_behavior (first_stmt_info));
>                         misalign = 0;
>                       }
>                     else
> -                     misalign = DR_MISALIGNMENT (first_dr);
> +                     misalign = dr_misalignment (first_stmt_info);
>                     if (dataref_offset == NULL_TREE
>                         && TREE_CODE (dataref_ptr) == SSA_NAME)
>                       set_ptr_info_alignment (get_ptr_info (dataref_ptr),
> @@ -8365,7 +8370,7 @@ vectorizable_load (stmt_vec_info stmt_in
>                                          : build_int_cst (ref_type, 0));
>                         if (alignment_support_scheme == dr_aligned)
>                           ;
> -                       else if (DR_MISALIGNMENT (first_dr) == -1)
> +                       else if (dr_misalignment (first_stmt_info) == -1)
>                           TREE_TYPE (data_ref)
>                             = build_aligned_type (TREE_TYPE (data_ref),
>                                                   align * BITS_PER_UNIT);
> @@ -8392,7 +8397,8 @@ vectorizable_load (stmt_vec_info stmt_in
>                       ptr = copy_ssa_name (dataref_ptr);
>                     else
>                       ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
> -                   unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
> +                   unsigned int align
> +                     = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
>                     new_stmt = gimple_build_assign
>                                  (ptr, BIT_AND_EXPR, dataref_ptr,
>                                   build_int_cst
> @@ -8436,7 +8442,8 @@ vectorizable_load (stmt_vec_info stmt_in
>                       new_temp = copy_ssa_name (dataref_ptr);
>                     else
>                       new_temp = make_ssa_name (TREE_TYPE (dataref_ptr));
> -                   unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
> +                   unsigned int align
> +                     = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
>                     new_stmt = gimple_build_assign
>                       (new_temp, BIT_AND_EXPR, dataref_ptr,
>                        build_int_cst (TREE_TYPE (dataref_ptr),

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [38/46] Pass stmt_vec_infos instead of data_references where relevant
  2018-07-24 10:08 ` [38/46] Pass stmt_vec_infos instead of data_references where relevant Richard Sandiford
@ 2018-07-25 10:21   ` Richard Biener
  2018-07-25 11:21     ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25 10:21 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes various routines (mostly in tree-vect-data-refs.c)
> take stmt_vec_infos rather than data_references.  The affected routines
> are really dealing with the way that an access is going to vectorised
> for a particular stmt_vec_info, rather than with the original scalar
> access described by the data_reference.

Similar.  Doesn't it make more sense to pass both stmt_info and DR to
the functions?

We currently cannot handle aggregate copies in the to-be-vectorized IL
but rely on SRA and friends to elide those.  That's the only two-DR
stmt I can think of for vectorization.  Maybe aggregate by-value / return
function calls with OMP SIMD if that supports this somehow.

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_supportable_dr_alignment): Take
>         a stmt_vec_info rather than a data_reference.
>         * tree-vect-data-refs.c (vect_calculate_target_alignment)
>         (vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
>         (verify_data_ref_alignment, vector_alignment_reachable_p)
>         (vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
>         (vect_peeling_supportable, vect_analyze_group_access_1)
>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>         (vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
>         (vect_supportable_dr_alignment): Likewise.  Update calls to other
>         functions for which the same change is being made.
>         (vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
>         (vect_analyze_data_refs_alignment): Update calls accordingly.
>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>         (vect_analyze_data_ref_accesses): Likewise.
>         (vect_prune_runtime_alias_test_list): Likewise.
>         (vect_create_addr_base_for_vector_ref): Likewise.
>         (vect_create_data_ref_ptr): Likewise.
>         (_vect_peel_info::dr): Replace with...
>         (_vect_peel_info::stmt_info): ...this new field.
>         (vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
>         accordingly, and update after above interface changes.
>         (vect_peeling_hash_get_lowest_cost): Likewise
>         (vect_peeling_hash_choose_best_peeling): Likewise.
>         (vect_enhance_data_refs_alignment): Likewise.
>         (vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
>         rather than a data_reference.
>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>         (get_negative_load_store_type): Update calls to
>         vect_supportable_dr_alignment.
>         (vect_get_data_ptr_increment, ensure_base_align): Take a
>         stmt_vec_info instead of a data_reference.
>         (vectorizable_store, vectorizable_load): Update calls after
>         above interface changes.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:05.744462369 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:08.924434128 +0100
> @@ -1541,7 +1541,7 @@ extern tree vect_get_mask_type_for_stmt
>  /* In tree-vect-data-refs.c.  */
>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>  extern enum dr_alignment_support vect_supportable_dr_alignment
> -                                           (struct data_reference *, bool);
> +  (stmt_vec_info, bool);
>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>                                             HOST_WIDE_INT *);
>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:24:05.740462405 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:08.924434128 +0100
> @@ -858,19 +858,19 @@ vect_record_base_alignments (vec_info *v
>      }
>  }
>
> -/* Return the target alignment for the vectorized form of DR.  */
> +/* Return the target alignment for the vectorized form of the load or store
> +   in STMT_INFO.  */
>
>  static unsigned int
> -vect_calculate_target_alignment (struct data_reference *dr)
> +vect_calculate_target_alignment (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    return targetm.vectorize.preferred_vector_alignment (vectype);
>  }
>
>  /* Function vect_compute_data_ref_alignment
>
> -   Compute the misalignment of the data reference DR.
> +   Compute the misalignment of the load or store in STMT_INFO.
>
>     Output:
>     1. dr_misalignment (STMT_INFO) is defined.
> @@ -879,9 +879,9 @@ vect_calculate_target_alignment (struct
>     only for trivial cases. TODO.  */
>
>  static void
> -vect_compute_data_ref_alignment (struct data_reference *dr)
> +vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> @@ -905,7 +905,7 @@ vect_compute_data_ref_alignment (struct
>    bool step_preserves_misalignment_p;
>
>    unsigned HOST_WIDE_INT vector_alignment
> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
> +    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
>    STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
>
>    /* No step for BB vectorization.  */
> @@ -1053,28 +1053,28 @@ vect_compute_data_ref_alignment (struct
>  }
>
>  /* Function vect_update_misalignment_for_peel.
> -   Sets DR's misalignment
> -   - to 0 if it has the same alignment as DR_PEEL,
> +   Sets the misalignment of the load or store in STMT_INFO
> +   - to 0 if it has the same alignment as PEEL_STMT_INFO,
>     - to the misalignment computed using NPEEL if DR's misalignment is known,
>     - to -1 (unknown) otherwise.
>
> -   DR - the data reference whose misalignment is to be adjusted.
> -   DR_PEEL - the data reference whose misalignment is being made
> -             zero in the vector loop by the peel.
> +   STMT_INFO - the load or store whose misalignment is to be adjusted.
> +   PEEL_STMT_INFO - the load or store whose misalignment is being made
> +                   zero in the vector loop by the peel.
>     NPEEL - the number of iterations in the peel loop if the misalignment
> -           of DR_PEEL is known at compile time.  */
> +          of PEEL_STMT_INFO is known at compile time.  */
>
>  static void
> -vect_update_misalignment_for_peel (struct data_reference *dr,
> -                                   struct data_reference *dr_peel, int npeel)
> +vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
> +                                  stmt_vec_info peel_stmt_info, int npeel)
>  {
>    unsigned int i;
>    vec<dr_p> same_aligned_drs;
>    struct data_reference *current_dr;
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> +  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
>    int dr_size = vect_get_scalar_dr_size (dr);
>    int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
>
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
> @@ -1085,7 +1085,7 @@ vect_update_misalignment_for_peel (struc
>
>    /* It can be assumed that the data refs with the same alignment as dr_peel
>       are aligned in the vector loop.  */
> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>      {
>        if (current_dr != dr)
> @@ -1118,13 +1118,15 @@ vect_update_misalignment_for_peel (struc
>
>  /* Function verify_data_ref_alignment
>
> -   Return TRUE if DR can be handled with respect to alignment.  */
> +   Return TRUE if the load or store in STMT_INFO can be handled with
> +   respect to alignment.  */
>
>  static bool
> -verify_data_ref_alignment (data_reference_p dr)
> +verify_data_ref_alignment (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    enum dr_alignment_support supportable_dr_alignment
> -    = vect_supportable_dr_alignment (dr, false);
> +    = vect_supportable_dr_alignment (stmt_info, false);
>    if (!supportable_dr_alignment)
>      {
>        if (dump_enabled_p ())
> @@ -1181,7 +1183,7 @@ vect_verify_datarefs_alignment (loop_vec
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      if (! verify_data_ref_alignment (dr))
> +      if (! verify_data_ref_alignment (stmt_info))
>         return false;
>      }
>
> @@ -1203,13 +1205,13 @@ not_size_aligned (tree exp)
>
>  /* Function vector_alignment_reachable_p
>
> -   Return true if vector alignment for DR is reachable by peeling
> -   a few loop iterations.  Return false otherwise.  */
> +   Return true if the vector alignment is reachable for the load or store
> +   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
>
>  static bool
> -vector_alignment_reachable_p (struct data_reference *dr)
> +vector_alignment_reachable_p (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -1270,16 +1272,16 @@ vector_alignment_reachable_p (struct dat
>  }
>
>
> -/* Calculate the cost of the memory access represented by DR.  */
> +/* Calculate the cost of the memory access in STMT_INFO.  */
>
>  static void
> -vect_get_data_access_cost (struct data_reference *dr,
> +vect_get_data_access_cost (stmt_vec_info stmt_info,
>                             unsigned int *inside_cost,
>                             unsigned int *outside_cost,
>                            stmt_vector_for_cost *body_cost_vec,
>                            stmt_vector_for_cost *prologue_cost_vec)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    int ncopies;
>
> @@ -1303,7 +1305,7 @@ vect_get_data_access_cost (struct data_r
>
>  typedef struct _vect_peel_info
>  {
> -  struct data_reference *dr;
> +  stmt_vec_info stmt_info;
>    int npeel;
>    unsigned int count;
>  } *vect_peel_info;
> @@ -1337,16 +1339,17 @@ peel_info_hasher::equal (const _vect_pee
>  }
>
>
> -/* Insert DR into peeling hash table with NPEEL as key.  */
> +/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
>
>  static void
>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
> +                         loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
>                            int npeel)
>  {
>    struct _vect_peel_info elem, *slot;
>    _vect_peel_info **new_slot;
> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> +  bool supportable_dr_alignment
> +    = vect_supportable_dr_alignment (stmt_info, true);
>
>    elem.npeel = npeel;
>    slot = peeling_htab->find (&elem);
> @@ -1356,7 +1359,7 @@ vect_peeling_hash_insert (hash_table<pee
>      {
>        slot = XNEW (struct _vect_peel_info);
>        slot->npeel = npeel;
> -      slot->dr = dr;
> +      slot->stmt_info = stmt_info;
>        slot->count = 1;
>        new_slot = peeling_htab->find_slot (slot, INSERT);
>        *new_slot = slot;
> @@ -1383,19 +1386,19 @@ vect_peeling_hash_get_most_frequent (_ve
>      {
>        max->peel_info.npeel = elem->npeel;
>        max->peel_info.count = elem->count;
> -      max->peel_info.dr = elem->dr;
> +      max->peel_info.stmt_info = elem->stmt_info;
>      }
>
>    return 1;
>  }
>
>  /* Get the costs of peeling NPEEL iterations checking data access costs
> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
> -   misalignment will be zero after peeling.  */
> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
> +   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
>
>  static void
>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
> -                               struct data_reference *dr0,
> +                               stmt_vec_info peel_stmt_info,
>                                 unsigned int *inside_cost,
>                                 unsigned int *outside_cost,
>                                 stmt_vector_for_cost *body_cost_vec,
> @@ -1403,8 +1406,6 @@ vect_get_peeling_costs_all_drs (vec<data
>                                 unsigned int npeel,
>                                 bool unknown_misalignment)
>  {
> -  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
> -                                 : NULL_STMT_VEC_INFO);
>    unsigned i;
>    data_reference *dr;
>
> @@ -1433,8 +1434,8 @@ vect_get_peeling_costs_all_drs (vec<data
>        else if (unknown_misalignment && stmt_info == peel_stmt_info)
>         set_dr_misalignment (stmt_info, 0);
>        else
> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
> +       vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
> +      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
>                                  body_cost_vec, prologue_cost_vec);
>        set_dr_misalignment (stmt_info, save_misalignment);
>      }
> @@ -1450,7 +1451,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    vect_peel_info elem = *slot;
>    int dummy;
>    unsigned int inside_cost = 0, outside_cost = 0;
> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
> +  stmt_vec_info stmt_info = elem->stmt_info;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>                        epilogue_cost_vec;
> @@ -1460,7 +1461,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    epilogue_cost_vec.create (2);
>
>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
> -                                 elem->dr, &inside_cost, &outside_cost,
> +                                 elem->stmt_info, &inside_cost, &outside_cost,
>                                   &body_cost_vec, &prologue_cost_vec,
>                                   elem->npeel, false);
>
> @@ -1484,7 +1485,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>      {
>        min->inside_cost = inside_cost;
>        min->outside_cost = outside_cost;
> -      min->peel_info.dr = elem->dr;
> +      min->peel_info.stmt_info = elem->stmt_info;
>        min->peel_info.npeel = elem->npeel;
>        min->peel_info.count = elem->count;
>      }
> @@ -1503,7 +1504,7 @@ vect_peeling_hash_choose_best_peeling (h
>  {
>     struct _vect_peel_extended_info res;
>
> -   res.peel_info.dr = NULL;
> +   res.peel_info.stmt_info = NULL;
>
>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>       {
> @@ -1527,8 +1528,8 @@ vect_peeling_hash_choose_best_peeling (h
>  /* Return true if the new peeling NPEEL is supported.  */
>
>  static bool
> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
> -                         unsigned npeel)
> +vect_peeling_supportable (loop_vec_info loop_vinfo,
> +                         stmt_vec_info peel_stmt_info, unsigned npeel)
>  {
>    unsigned i;
>    struct data_reference *dr = NULL;
> @@ -1540,10 +1541,10 @@ vect_peeling_supportable (loop_vec_info
>      {
>        int save_misalignment;
>
> -      if (dr == dr0)
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      if (stmt_info == peel_stmt_info)
>         continue;
>
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        /* For interleaving, only the alignment of the first access
>          matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> @@ -1557,8 +1558,9 @@ vect_peeling_supportable (loop_vec_info
>         continue;
>
>        save_misalignment = dr_misalignment (stmt_info);
> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (stmt_info, false);
>        set_dr_misalignment (stmt_info, save_misalignment);
>
>        if (!supportable_dr_alignment)
> @@ -1665,8 +1667,9 @@ vect_enhance_data_refs_alignment (loop_v
>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    enum dr_alignment_support supportable_dr_alignment;
> -  struct data_reference *dr0 = NULL, *first_store = NULL;
>    struct data_reference *dr;
> +  stmt_vec_info peel_stmt_info = NULL;
> +  stmt_vec_info first_store_info = NULL;
>    unsigned int i, j;
>    bool do_peeling = false;
>    bool do_versioning = false;
> @@ -1675,7 +1678,7 @@ vect_enhance_data_refs_alignment (loop_v
>    bool one_misalignment_known = false;
>    bool one_misalignment_unknown = false;
>    bool one_dr_unsupportable = false;
> -  struct data_reference *unsupportable_dr = NULL;
> +  stmt_vec_info unsupportable_stmt_info = NULL;
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>    unsigned possible_npeel_number = 1;
>    tree vectype;
> @@ -1745,8 +1748,9 @@ vect_enhance_data_refs_alignment (loop_v
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> -      do_peeling = vector_alignment_reachable_p (dr);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (stmt_info, true);
> +      do_peeling = vector_alignment_reachable_p (stmt_info);
>        if (do_peeling)
>          {
>           if (known_alignment_for_access_p (stmt_info))
> @@ -1796,7 +1800,7 @@ vect_enhance_data_refs_alignment (loop_v
>                for (j = 0; j < possible_npeel_number; j++)
>                  {
>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
> -                                           dr, npeel_tmp);
> +                                           stmt_info, npeel_tmp);
>                   npeel_tmp += target_align / dr_size;
>                  }
>
> @@ -1810,11 +1814,11 @@ vect_enhance_data_refs_alignment (loop_v
>                   stores over load.  */
>               unsigned same_align_drs
>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
> -             if (!dr0
> +             if (!peel_stmt_info
>                   || same_align_drs_max < same_align_drs)
>                 {
>                   same_align_drs_max = same_align_drs;
> -                 dr0 = dr;
> +                 peel_stmt_info = stmt_info;
>                 }
>               /* For data-refs with the same number of related
>                  accesses prefer the one where the misalign
> @@ -1822,6 +1826,7 @@ vect_enhance_data_refs_alignment (loop_v
>               else if (same_align_drs_max == same_align_drs)
>                 {
>                   struct loop *ivloop0, *ivloop;
> +                 data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>                   ivloop0 = outermost_invariant_loop_for_expr
>                     (loop, DR_BASE_ADDRESS (dr0));
>                   ivloop = outermost_invariant_loop_for_expr
> @@ -1829,7 +1834,7 @@ vect_enhance_data_refs_alignment (loop_v
>                   if ((ivloop && !ivloop0)
>                       || (ivloop && ivloop0
>                           && flow_loop_nested_p (ivloop, ivloop0)))
> -                   dr0 = dr;
> +                   peel_stmt_info = stmt_info;
>                 }
>
>               one_misalignment_unknown = true;
> @@ -1839,11 +1844,11 @@ vect_enhance_data_refs_alignment (loop_v
>               if (!supportable_dr_alignment)
>               {
>                 one_dr_unsupportable = true;
> -               unsupportable_dr = dr;
> +               unsupportable_stmt_info = stmt_info;
>               }
>
> -             if (!first_store && DR_IS_WRITE (dr))
> -               first_store = dr;
> +             if (!first_store_info && DR_IS_WRITE (dr))
> +               first_store_info = stmt_info;
>              }
>          }
>        else
> @@ -1886,16 +1891,16 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
> +      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
>                                       &load_inside_cost,
>                                       &load_outside_cost,
>                                       &dummy, &dummy, estimated_npeels, true);
>        dummy.release ();
>
> -      if (first_store)
> +      if (first_store_info)
>         {
>           dummy.create (2);
> -         vect_get_peeling_costs_all_drs (datarefs, first_store,
> +         vect_get_peeling_costs_all_drs (datarefs, first_store_info,
>                                           &store_inside_cost,
>                                           &store_outside_cost,
>                                           &dummy, &dummy,
> @@ -1912,7 +1917,7 @@ vect_enhance_data_refs_alignment (loop_v
>           || (load_inside_cost == store_inside_cost
>               && load_outside_cost > store_outside_cost))
>         {
> -         dr0 = first_store;
> +         peel_stmt_info = first_store_info;
>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>         }
> @@ -1936,18 +1941,18 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        peel_for_unknown_alignment.peel_info.count = 1
> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
> +       + STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
>      }
>
>    peel_for_unknown_alignment.peel_info.npeel = 0;
> -  peel_for_unknown_alignment.peel_info.dr = dr0;
> +  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
>
>    best_peel = peel_for_unknown_alignment;
>
>    peel_for_known_alignment.inside_cost = INT_MAX;
>    peel_for_known_alignment.outside_cost = INT_MAX;
>    peel_for_known_alignment.peel_info.count = 0;
> -  peel_for_known_alignment.peel_info.dr = NULL;
> +  peel_for_known_alignment.peel_info.stmt_info = NULL;
>
>    if (do_peeling && one_misalignment_known)
>      {
> @@ -1959,7 +1964,7 @@ vect_enhance_data_refs_alignment (loop_v
>      }
>
>    /* Compare costs of peeling for known and unknown alignment. */
> -  if (peel_for_known_alignment.peel_info.dr != NULL
> +  if (peel_for_known_alignment.peel_info.stmt_info
>        && peel_for_unknown_alignment.inside_cost
>        >= peel_for_known_alignment.inside_cost)
>      {
> @@ -1976,7 +1981,7 @@ vect_enhance_data_refs_alignment (loop_v
>       since we'd have to discard a chosen peeling except when it accidentally
>       aligned the unsupportable data ref.  */
>    if (one_dr_unsupportable)
> -    dr0 = unsupportable_dr;
> +    peel_stmt_info = unsupportable_stmt_info;
>    else if (do_peeling)
>      {
>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
> @@ -2007,7 +2012,7 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        npeel = best_peel.peel_info.npeel;
> -      dr0 = best_peel.peel_info.dr;
> +      peel_stmt_info = best_peel.peel_info.stmt_info;
>
>        /* If no peeling is not more expensive than the best peeling we
>          have so far, don't perform any peeling.  */
> @@ -2017,8 +2022,8 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
>        vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
> +      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>
>        if (known_alignment_for_access_p (peel_stmt_info))
>          {
> @@ -2052,7 +2057,7 @@ vect_enhance_data_refs_alignment (loop_v
>          }
>
>        /* Ensure that all datarefs can be vectorized after the peel.  */
> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
> +      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
>         do_peeling = false;
>
>        /* Check if all datarefs are supportable and log.  */
> @@ -2125,7 +2130,8 @@ vect_enhance_data_refs_alignment (loop_v
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
>
> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
> +               vect_update_misalignment_for_peel (stmt_info,
> +                                                  peel_stmt_info, npeel);
>               }
>
>            LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
> @@ -2188,7 +2194,8 @@ vect_enhance_data_refs_alignment (loop_v
>               break;
>             }
>
> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +         supportable_dr_alignment
> +           = vect_supportable_dr_alignment (stmt_info, false);
>
>            if (!supportable_dr_alignment)
>              {
> @@ -2203,7 +2210,6 @@ vect_enhance_data_refs_alignment (loop_v
>                    break;
>                  }
>
> -             stmt_info = vect_dr_stmt (dr);
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>               gcc_assert (vectype);
>
> @@ -2314,9 +2320,9 @@ vect_find_same_alignment_drs (struct dat
>    if (maybe_ne (diff, 0))
>      {
>        /* Get the wider of the two alignments.  */
> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
> +      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
>                               / BITS_PER_UNIT);
> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
> +      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
>                               / BITS_PER_UNIT);
>        unsigned int max_align = MAX (align_a, align_b);
>
> @@ -2366,7 +2372,7 @@ vect_analyze_data_refs_alignment (loop_v
>      {
>        stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        if (STMT_VINFO_VECTORIZABLE (stmt_info))
> -       vect_compute_data_ref_alignment (dr);
> +       vect_compute_data_ref_alignment (stmt_info);
>      }
>
>    return true;
> @@ -2382,17 +2388,16 @@ vect_slp_analyze_and_verify_node_alignme
>       the node is permuted in which case we start from the first
>       element in the group.  */
>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  stmt_vec_info stmt_info = first_stmt_info;
>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
> -    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
> +    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>
> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
> -  vect_compute_data_ref_alignment (dr);
> +  vect_compute_data_ref_alignment (stmt_info);
>    /* For creating the data-ref pointer we need alignment of the
>       first element anyway.  */
> -  if (dr != first_dr)
> -    vect_compute_data_ref_alignment (first_dr);
> -  if (! verify_data_ref_alignment (dr))
> +  if (stmt_info != first_stmt_info)
> +    vect_compute_data_ref_alignment (first_stmt_info);
> +  if (! verify_data_ref_alignment (first_stmt_info))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -2430,19 +2435,19 @@ vect_slp_analyze_and_verify_instance_ali
>  }
>
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> -   accesses of legal size, step, etc.  Detect gaps, single element
> -   interleaving, and other special cases. Set grouped access info.
> -   Collect groups of strided stores for further use in SLP analysis.
> -   Worker for vect_analyze_group_access.  */
> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
> +   single element interleaving, and other special cases.  Set grouped
> +   access info.  Collect groups of strided stores for further use in
> +   SLP analysis.  Worker for vect_analyze_group_access.  */
>
>  static bool
> -vect_analyze_group_access_1 (struct data_reference *dr)
> +vect_analyze_group_access_1 (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    HOST_WIDE_INT dr_step = -1;
> @@ -2519,7 +2524,7 @@ vect_analyze_group_access_1 (struct data
>        if (bb_vinfo)
>         {
>           /* Mark the statement as unvectorizable.  */
> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>           return true;
>         }
>
> @@ -2667,18 +2672,18 @@ vect_analyze_group_access_1 (struct data
>    return true;
>  }
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> -   accesses of legal size, step, etc.  Detect gaps, single element
> -   interleaving, and other special cases. Set grouped access info.
> -   Collect groups of strided stores for further use in SLP analysis.  */
> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
> +   single element interleaving, and other special cases.  Set grouped
> +   access info.  Collect groups of strided stores for further use in
> +   SLP analysis.  */
>
>  static bool
> -vect_analyze_group_access (struct data_reference *dr)
> +vect_analyze_group_access (stmt_vec_info stmt_info)
>  {
> -  if (!vect_analyze_group_access_1 (dr))
> +  if (!vect_analyze_group_access_1 (stmt_info))
>      {
>        /* Dissolve the group if present.  */
> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
>        while (stmt_info)
>         {
>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> @@ -2691,16 +2696,16 @@ vect_analyze_group_access (struct data_r
>    return true;
>  }
>
> -/* Analyze the access pattern of the data-reference DR.
> +/* Analyze the access pattern of the load or store in STMT_INFO.
>     In case of non-consecutive accesses call vect_analyze_group_access() to
>     analyze groups of accesses.  */
>
>  static bool
> -vect_analyze_data_ref_access (struct data_reference *dr)
> +vect_analyze_data_ref_access (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>
> @@ -2780,10 +2785,10 @@ vect_analyze_data_ref_access (struct dat
>    if (TREE_CODE (step) != INTEGER_CST)
>      return (STMT_VINFO_STRIDED_P (stmt_info)
>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -               || vect_analyze_group_access (dr)));
> +               || vect_analyze_group_access (stmt_info)));
>
>    /* Not consecutive access - check if it's a part of interleaving group.  */
> -  return vect_analyze_group_access (dr);
> +  return vect_analyze_group_access (stmt_info);
>  }
>
>  /* Compare two data-references DRA and DRB to group them into chunks
> @@ -3062,25 +3067,28 @@ vect_analyze_data_ref_accesses (vec_info
>      }
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
> -        && !vect_analyze_data_ref_access (dr))
> -      {
> -       if (dump_enabled_p ())
> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                          "not vectorized: complicated access pattern.\n");
> +    {
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      if (STMT_VINFO_VECTORIZABLE (stmt_info)
> +         && !vect_analyze_data_ref_access (stmt_info))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "not vectorized: complicated access pattern.\n");
>
> -        if (is_a <bb_vec_info> (vinfo))
> -         {
> -           /* Mark the statement as not vectorizable.  */
> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> -           continue;
> -         }
> -        else
> -         {
> -           datarefs_copy.release ();
> -           return false;
> -         }
> -      }
> +         if (is_a <bb_vec_info> (vinfo))
> +           {
> +             /* Mark the statement as not vectorizable.  */
> +             STMT_VINFO_VECTORIZABLE (stmt_info) = false;
> +             continue;
> +           }
> +         else
> +           {
> +             datarefs_copy.release ();
> +             return false;
> +           }
> +       }
> +    }
>
>    datarefs_copy.release ();
>    return true;
> @@ -3089,7 +3097,7 @@ vect_analyze_data_ref_accesses (vec_info
>  /* Function vect_vfa_segment_size.
>
>     Input:
> -     DR: The data reference.
> +     STMT_INFO: the load or store statement.
>       LENGTH_FACTOR: segment length to consider.
>
>     Return a value suitable for the dr_with_seg_len::seg_len field.
> @@ -3098,8 +3106,9 @@ vect_analyze_data_ref_accesses (vec_info
>     the size of the access; in effect it only describes the first byte.  */
>
>  static tree
> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
> +vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    length_factor = size_binop (MINUS_EXPR,
>                               fold_convert (sizetype, length_factor),
>                               size_one_node);
> @@ -3107,23 +3116,23 @@ vect_vfa_segment_size (struct data_refer
>                      length_factor);
>  }
>
> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
> +/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
>     gives the worst-case number of bytes covered by the segment.  */
>
>  static unsigned HOST_WIDE_INT
> -vect_vfa_access_size (data_reference *dr)
> +vect_vfa_access_size (stmt_vec_info stmt_vinfo)
>  {
> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
>    tree ref_type = TREE_TYPE (DR_REF (dr));
>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>    unsigned HOST_WIDE_INT access_size = ref_size;
>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>      {
> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>      }
>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
> -      && (vect_supportable_dr_alignment (dr, false)
> +      && (vect_supportable_dr_alignment (stmt_vinfo, false)
>           == dr_explicit_realign_optimized))
>      {
>        /* We might access a full vector's worth.  */
> @@ -3281,13 +3290,14 @@ vect_check_lower_bound (loop_vec_info lo
>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>  }
>
> -/* Return true if it's unlikely that the step of the vectorized form of DR
> -   will span fewer than GAP bytes.  */
> +/* Return true if it's unlikely that the step of the vectorized form of
> +   the load or store in STMT_INFO will span fewer than GAP bytes.  */
>
>  static bool
> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
> +vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
> @@ -3295,16 +3305,20 @@ vect_small_gap_p (loop_vec_info loop_vin
>    return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
>  }
>
> -/* Return true if we know that there is no alias between DR_A and DR_B
> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
> -   *LOWER_BOUND_OUT to this N.  */
> +/* Return true if we know that there is no alias between the loads and
> +   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
> +   STMT_INFO_A's access is >= some N.  When returning true,
> +   set *LOWER_BOUND_OUT to this N.  */
>
>  static bool
> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
> +vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
> +                               stmt_vec_info stmt_info_b,
>                                 poly_uint64 *lower_bound_out)
>  {
>    /* Check that there is a constant gap of known sign between DR_A
>       and DR_B.  */
> +  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
> +  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
>    poly_int64 init_a, init_b;
>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
> @@ -3324,8 +3338,7 @@ vectorizable_with_step_bound_p (data_ref
>    /* If the two accesses could be dependent within a scalar iteration,
>       make sure that we'd retain their order.  */
>    if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
> -                                        vect_dr_stmt (dr_b)))
> +      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
>      return false;
>
>    /* There is no alias if abs (DR_STEP) is greater than or equal to
> @@ -3426,7 +3439,8 @@ vect_prune_runtime_alias_test_list (loop
>          and intra-iteration dependencies are guaranteed to be honored.  */
>        if (ignore_step_p
>           && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
> -             || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
> +             || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
> +                                                &lower_bound)))
>         {
>           if (dump_enabled_p ())
>             {
> @@ -3446,9 +3460,10 @@ vect_prune_runtime_alias_test_list (loop
>          than the number of bytes handled by one vector iteration.)  */
>        if (!ignore_step_p
>           && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
> -         && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
> -         && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
> -             || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
> +         && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
> +                                            &lower_bound)
> +         && (vect_small_gap_p (stmt_info_a, lower_bound)
> +             || vect_small_gap_p (stmt_info_b, lower_bound)))
>         {
>           bool unsigned_p = dr_known_forward_stride_p (dr_a);
>           if (dump_enabled_p ())
> @@ -3501,11 +3516,13 @@ vect_prune_runtime_alias_test_list (loop
>             length_factor = scalar_loop_iters;
>           else
>             length_factor = size_int (vect_factor);
> -         segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
> -         segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
> +         segment_length_a = vect_vfa_segment_size (stmt_info_a,
> +                                                   length_factor);
> +         segment_length_b = vect_vfa_segment_size (stmt_info_b,
> +                                                   length_factor);
>         }
> -      access_size_a = vect_vfa_access_size (dr_a);
> -      access_size_b = vect_vfa_access_size (dr_b);
> +      access_size_a = vect_vfa_access_size (stmt_info_a);
> +      access_size_b = vect_vfa_access_size (stmt_info_b);
>        align_a = vect_vfa_align (dr_a);
>        align_b = vect_vfa_align (dr_b);
>
> @@ -4463,12 +4480,12 @@ vect_get_new_ssa_name (tree type, enum v
>    return new_vect_var;
>  }
>
> -/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
> +/* Duplicate ptr info and set alignment/misalignment on NAME from STMT_INFO.  */
>
>  static void
> -vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
> +vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
>    int misalign = dr_misalignment (stmt_info);
>    if (misalign == DR_MISALIGNMENT_UNKNOWN)
> @@ -4579,7 +4596,7 @@ vect_create_addr_base_for_vector_ref (st
>        && TREE_CODE (addr_base) == SSA_NAME
>        && !SSA_NAME_PTR_INFO (addr_base))
>      {
> -      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
> +      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
>        if (offset || byte_offset)
>         mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
>      }
> @@ -4845,8 +4862,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
>         {
> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>         }
>        if (ptr_incr)
>         *ptr_incr = incr;
> @@ -4875,8 +4892,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
>         {
> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>         }
>        if (ptr_incr)
>         *ptr_incr = incr;
> @@ -6434,17 +6451,17 @@ vect_can_force_dr_alignment_p (const_tre
>  }
>
>
> -/* Return whether the data reference DR is supported with respect to its
> -   alignment.
> +/* Return whether the load or store in STMT_INFO is supported with
> +   respect to its alignment.
>     If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
>     it is aligned, i.e., check if it is possible to vectorize it with different
>     alignment.  */
>
>  enum dr_alignment_support
> -vect_supportable_dr_alignment (struct data_reference *dr,
> +vect_supportable_dr_alignment (stmt_vec_info stmt_info,
>                                 bool check_aligned_accesses)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    machine_mode mode = TYPE_MODE (vectype);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:05.744462369 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:08.924434128 +0100
> @@ -1057,8 +1057,8 @@ vect_get_store_cost (stmt_vec_info stmt_
>                      unsigned int *inside_cost,
>                      stmt_vector_for_cost *body_cost_vec)
>  {
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  int alignment_support_scheme
> +    = vect_supportable_dr_alignment (stmt_info, false);
>
>    switch (alignment_support_scheme)
>      {
> @@ -1237,8 +1237,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
>                     stmt_vector_for_cost *body_cost_vec,
>                     bool record_prologue_costs)
>  {
> -  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  int alignment_support_scheme
> +    = vect_supportable_dr_alignment (stmt_info, false);
>
>    switch (alignment_support_scheme)
>      {
> @@ -2340,7 +2340,6 @@ get_negative_load_store_type (stmt_vec_i
>                               vec_load_store_type vls_type,
>                               unsigned int ncopies)
>  {
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    dr_alignment_support alignment_support_scheme;
>
>    if (ncopies > 1)
> @@ -2351,7 +2350,7 @@ get_negative_load_store_type (stmt_vec_i
>        return VMAT_ELEMENTWISE;
>      }
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
>    if (alignment_support_scheme != dr_aligned
>        && alignment_support_scheme != dr_unaligned_supported)
>      {
> @@ -2924,15 +2923,14 @@ vect_get_strided_load_store_ops (stmt_ve
>  }
>
>  /* Return the amount that should be added to a vector pointer to move
> -   to the next or previous copy of AGGR_TYPE.  DR is the data reference
> -   being vectorized and MEMORY_ACCESS_TYPE describes the type of
> +   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
> +   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
>     vectorization.  */
>
>  static tree
> -vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
> +vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
>                              vect_memory_access_type memory_access_type)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    if (memory_access_type == VMAT_INVARIANT)
>      return size_zero_node;
>
> @@ -6171,12 +6169,12 @@ vectorizable_operation (stmt_vec_info st
>    return true;
>  }
>
> -/* A helper function to ensure data reference DR's base alignment.  */
> +/* If we decided to increase the base alignment for the memory access in
> +   STMT_INFO, but haven't increased it yet, do so now.  */
>
>  static void
> -ensure_base_align (struct data_reference *dr)
> +ensure_base_align (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
>      return;
>
> @@ -6439,7 +6437,7 @@ vectorizable_store (stmt_vec_info stmt_i
>
>    /* Transform.  */
>
> -  ensure_base_align (dr);
> +  ensure_base_align (stmt_info);
>
>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>      {
> @@ -6882,7 +6880,8 @@ vectorizable_store (stmt_vec_info stmt_i
>    auto_vec<tree> dr_chain (group_size);
>    oprnds.create (group_size);
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
> +  alignment_support_scheme
> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> @@ -6920,7 +6919,8 @@ vectorizable_store (stmt_vec_info stmt_i
>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>        else
>         aggr_type = vectype;
> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
> +                                         memory_access_type);
>      }
>
>    if (mask)
> @@ -7667,7 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
>
>    /* Transform.  */
>
> -  ensure_base_align (dr);
> +  ensure_base_align (stmt_info);
>
>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>      {
> @@ -7990,7 +7990,8 @@ vectorizable_load (stmt_vec_info stmt_in
>        ref_type = reference_alias_ptr_type (DR_REF (first_dr));
>      }
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
> +  alignment_support_scheme
> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> @@ -8155,7 +8156,8 @@ vectorizable_load (stmt_vec_info stmt_in
>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>        else
>         aggr_type = vectype;
> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
> +                                         memory_access_type);
>      }
>
>    tree vec_mask = NULL_TREE;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info
  2018-07-25  9:21   ` Richard Biener
@ 2018-07-25 11:03     ` Richard Sandiford
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Sandiford @ 2018-07-25 11:03 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 11:58 AM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch changes STMT_VINFO_VEC_STMT from a gimple stmt to a
>> stmt_vec_info and makes the vectorizable_* routines pass back
>> a stmt_vec_info to vect_transform_stmt.
>
> OK, but - I don't think we ever "use" that stmt_info on vectorized stmts apart
> from the chaining via related-stmt?  I'd also like to get rid of that chaining
> and instead do sth similar to SLP where we simply have a vec<> of
> vectorized stmts.

Yeah, agree that would be better.
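
Something like this, as a rough sketch (the field and accessor names
here are made up for illustration, not taken from an actual patch):

    struct _stmt_vec_info {
      ...
      /* All vectorized copies of this statement, in creation order.
         Would replace both vectorized_stmt and the chain maintained
         through STMT_VINFO_RELATED_STMT.  */
      vec<stmt_vec_info> vec_stmts;
      ...
    };

Copy J of a statement would then simply be
STMT_VINFO_VEC_STMTS (stmt_info)[J] rather than the result of walking
J steps along the related-stmt chain.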

Thanks,
Richard

>
> Richard.
>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Change from
>>         a gimple stmt to a stmt_vec_info.
>>         (vectorizable_condition, vectorizable_live_operation)
>>         (vectorizable_reduction, vectorizable_induction): Pass back the
>>         vectorized statement as a stmt_vec_info.
>>         * tree-vect-data-refs.c (vect_record_grouped_load_vectors): Update
>>         use of STMT_VINFO_VEC_STMT.
>>         * tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise,
>>         accumulating the inner phis that feed the STMT_VINFO_VEC_STMT
>>         as stmt_vec_infos rather than gimple stmts.
>>         (vectorize_fold_left_reduction): Change vec_stmt from a gimple stmt
>>         to a stmt_vec_info.
>>         (vectorizable_live_operation): Likewise.
>>         (vectorizable_reduction, vectorizable_induction): Likewise,
>>         updating use of STMT_VINFO_VEC_STMT.
>>         * tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Update use
>>         of STMT_VINFO_VEC_STMT.
>>         (vect_build_gather_load_calls, vectorizable_bswap, vectorizable_call)
>>         (vectorizable_simd_clone_call, vectorizable_conversion)
>>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>>         (vectorizable_comparison, can_vectorize_live_stmts): Change vec_stmt
>>         from a gimple stmt to a stmt_vec_info.
>>         (vect_transform_stmt): Update use of STMT_VINFO_VEC_STMT.  Pass a
>>         pointer to a stmt_vec_info to the vectorizable_* routines.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:22:44.297185652 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:22:47.489157307 +0100
>> @@ -812,7 +812,7 @@ struct _stmt_vec_info {
>>    tree vectype;
>>
>>    /* The vectorized version of the stmt.  */
>> -  gimple *vectorized_stmt;
>> +  stmt_vec_info vectorized_stmt;
>>
>>
>>    /* The following is relevant only for stmts that contain a non-scalar
>> @@ -1560,7 +1560,7 @@ extern void vect_remove_stores (gimple *
>>  extern bool vect_analyze_stmt (gimple *, bool *, slp_tree, slp_instance,
>>                                stmt_vector_for_cost *);
>>  extern bool vectorizable_condition (gimple *, gimple_stmt_iterator *,
>> -                                   gimple **, tree, int, slp_tree,
>> +                                   stmt_vec_info *, tree, int, slp_tree,
>>                                     stmt_vector_for_cost *);
>>  extern void vect_get_load_cost (stmt_vec_info, int, bool,
>>                                 unsigned int *, unsigned int *,
>> @@ -1649,13 +1649,13 @@ extern tree vect_get_loop_mask (gimple_s
>>  extern struct loop *vect_transform_loop (loop_vec_info);
>>  extern loop_vec_info vect_analyze_loop_form (struct loop *, vec_info_shared *);
>>  extern bool vectorizable_live_operation (gimple *, gimple_stmt_iterator *,
>> -                                        slp_tree, int, gimple **,
>> +                                        slp_tree, int, stmt_vec_info *,
>>                                          stmt_vector_for_cost *);
>>  extern bool vectorizable_reduction (gimple *, gimple_stmt_iterator *,
>> -                                   gimple **, slp_tree, slp_instance,
>> +                                   stmt_vec_info *, slp_tree, slp_instance,
>>                                     stmt_vector_for_cost *);
>>  extern bool vectorizable_induction (gimple *, gimple_stmt_iterator *,
>> -                                   gimple **, slp_tree,
>> +                                   stmt_vec_info *, slp_tree,
>>                                     stmt_vector_for_cost *);
>>  extern tree get_initial_def_for_reduction (gimple *, tree, tree *);
>>  extern bool vect_worthwhile_without_simd_p (vec_info *, tree_code);
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:44.285185759 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
>> @@ -6401,18 +6401,17 @@ vect_record_grouped_load_vectors (gimple
>>              {
>>                if (!DR_GROUP_SAME_DR_STMT (vinfo_for_stmt (next_stmt)))
>>                  {
>> -                 gimple *prev_stmt =
>> -                   STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
>> +                 stmt_vec_info prev_stmt_info
>> +                   = STMT_VINFO_VEC_STMT (vinfo_for_stmt (next_stmt));
>>                   stmt_vec_info rel_stmt_info
>> -                   = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt));
>> +                   = STMT_VINFO_RELATED_STMT (prev_stmt_info);
>>                   while (rel_stmt_info)
>>                     {
>> -                     prev_stmt = rel_stmt_info;
>> +                     prev_stmt_info = rel_stmt_info;
>>                       rel_stmt_info = STMT_VINFO_RELATED_STMT (rel_stmt_info);
>>                     }
>>
>> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (prev_stmt))
>> -                   = new_stmt_info;
>> +                 STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
>>                  }
>>              }
>>
>> Index: gcc/tree-vect-loop.c
>> ===================================================================
>> --- gcc/tree-vect-loop.c        2018-07-24 10:22:44.289185723 +0100
>> +++ gcc/tree-vect-loop.c        2018-07-24 10:22:47.489157307 +0100
>> @@ -4445,7 +4445,7 @@ vect_create_epilog_for_reduction (vec<tr
>>    gimple *use_stmt, *reduction_phi = NULL;
>>    bool nested_in_vect_loop = false;
>>    auto_vec<gimple *> new_phis;
>> -  auto_vec<gimple *> inner_phis;
>> +  auto_vec<stmt_vec_info> inner_phis;
>>    enum vect_def_type dt = vect_unknown_def_type;
>>    int j, i;
>>    auto_vec<tree> scalar_results;
>> @@ -4455,7 +4455,7 @@ vect_create_epilog_for_reduction (vec<tr
>>    bool slp_reduc = false;
>>    bool direct_slp_reduc;
>>    tree new_phi_result;
>> -  gimple *inner_phi = NULL;
>> +  stmt_vec_info inner_phi = NULL;
>>    tree induction_index = NULL_TREE;
>>
>>    if (slp_node)
>> @@ -4605,7 +4605,7 @@ vect_create_epilog_for_reduction (vec<tr
>>        tree indx_before_incr, indx_after_incr;
>>        poly_uint64 nunits_out = TYPE_VECTOR_SUBPARTS (vectype);
>>
>> -      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
>> +      gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info)->stmt;
>>        gcc_assert (gimple_assign_rhs_code (vec_stmt) == VEC_COND_EXPR);
>>
>>        int scalar_precision
>> @@ -4738,20 +4738,21 @@ vect_create_epilog_for_reduction (vec<tr
>>        inner_phis.create (vect_defs.length ());
>>        FOR_EACH_VEC_ELT (new_phis, i, phi)
>>         {
>> +         stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi);
>>           tree new_result = copy_ssa_name (PHI_RESULT (phi));
>>           gphi *outer_phi = create_phi_node (new_result, exit_bb);
>>           SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>>                            PHI_RESULT (phi));
>>           prev_phi_info = loop_vinfo->add_stmt (outer_phi);
>> -         inner_phis.quick_push (phi);
>> +         inner_phis.quick_push (phi_info);
>>           new_phis[i] = outer_phi;
>> -          while (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi)))
>> +         while (STMT_VINFO_RELATED_STMT (phi_info))
>>              {
>> -             phi = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (phi));
>> -             new_result = copy_ssa_name (PHI_RESULT (phi));
>> +             phi_info = STMT_VINFO_RELATED_STMT (phi_info);
>> +             new_result = copy_ssa_name (PHI_RESULT (phi_info->stmt));
>>               outer_phi = create_phi_node (new_result, exit_bb);
>>               SET_PHI_ARG_DEF (outer_phi, single_exit (loop)->dest_idx,
>> -                              PHI_RESULT (phi));
>> +                              PHI_RESULT (phi_info->stmt));
>>               stmt_vec_info outer_phi_info = loop_vinfo->add_stmt (outer_phi);
>>               STMT_VINFO_RELATED_STMT (prev_phi_info) = outer_phi_info;
>>               prev_phi_info = outer_phi_info;
>> @@ -5644,7 +5645,8 @@ vect_create_epilog_for_reduction (vec<tr
>>               if (double_reduc)
>>                 STMT_VINFO_VEC_STMT (exit_phi_vinfo) = inner_phi;
>>               else
>> -               STMT_VINFO_VEC_STMT (exit_phi_vinfo) = epilog_stmt;
>> +               STMT_VINFO_VEC_STMT (exit_phi_vinfo)
>> +                 = vinfo_for_stmt (epilog_stmt);
>>                if (!double_reduc
>>                    || STMT_VINFO_DEF_TYPE (exit_phi_vinfo)
>>                        != vect_double_reduction_def)
>> @@ -5706,8 +5708,8 @@ vect_create_epilog_for_reduction (vec<tr
>>                    add_phi_arg (vect_phi, vect_phi_init,
>>                                 loop_preheader_edge (outer_loop),
>>                                 UNKNOWN_LOCATION);
>> -                  add_phi_arg (vect_phi, PHI_RESULT (inner_phi),
>> -                               loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
>> +                 add_phi_arg (vect_phi, PHI_RESULT (inner_phi->stmt),
>> +                              loop_latch_edge (outer_loop), UNKNOWN_LOCATION);
>>                    if (dump_enabled_p ())
>>                      {
>>                        dump_printf_loc (MSG_NOTE, vect_location,
>> @@ -5846,7 +5848,7 @@ vect_expand_fold_left (gimple_stmt_itera
>>
>>  static bool
>>  vectorize_fold_left_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                              gimple **vec_stmt, slp_tree slp_node,
>> +                              stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                                gimple *reduc_def_stmt,
>>                                tree_code code, internal_fn reduc_fn,
>>                                tree ops[3], tree vectype_in,
>> @@ -6070,7 +6072,7 @@ is_nonwrapping_integer_induction (gimple
>>
>>  bool
>>  vectorizable_reduction (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                       gimple **vec_stmt, slp_tree slp_node,
>> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                         slp_instance slp_node_instance,
>>                         stmt_vector_for_cost *cost_vec)
>>  {
>> @@ -6220,7 +6222,8 @@ vectorizable_reduction (gimple *stmt, gi
>>                   else
>>                     {
>>                       if (j == 0)
>> -                       STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_phi;
>> +                       STMT_VINFO_VEC_STMT (stmt_info)
>> +                         = *vec_stmt = new_phi_info;
>>                       else
>>                         STMT_VINFO_RELATED_STMT (prev_phi_info) = new_phi_info;
>>                       prev_phi_info = new_phi_info;
>> @@ -7201,7 +7204,7 @@ vectorizable_reduction (gimple *stmt, gi
>>    /* Finalize the reduction-phi (set its arguments) and create the
>>       epilog reduction code.  */
>>    if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node)
>> -    vect_defs[0] = gimple_get_lhs (*vec_stmt);
>> +    vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt);
>>
>>    vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_stmt,
>>                                     epilog_copies, reduc_fn, phis,
>> @@ -7262,7 +7265,7 @@ vect_worthwhile_without_simd_p (vec_info
>>  bool
>>  vectorizable_induction (gimple *phi,
>>                         gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
>> -                       gimple **vec_stmt, slp_tree slp_node,
>> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                         stmt_vector_for_cost *cost_vec)
>>  {
>>    stmt_vec_info stmt_info = vinfo_for_stmt (phi);
>> @@ -7700,7 +7703,7 @@ vectorizable_induction (gimple *phi,
>>    add_phi_arg (induction_phi, vec_def, loop_latch_edge (iv_loop),
>>                UNKNOWN_LOCATION);
>>
>> -  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi;
>> +  STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = induction_phi_info;
>>
>>    /* In case that vectorization factor (VF) is bigger than the number
>>       of elements that we can fit in a vectype (nunits), we have to generate
>> @@ -7779,7 +7782,7 @@ vectorizable_induction (gimple *phi,
>>           gcc_assert (STMT_VINFO_RELEVANT_P (stmt_vinfo)
>>                       && !STMT_VINFO_LIVE_P (stmt_vinfo));
>>
>> -         STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt;
>> +         STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt_info;
>>           if (dump_enabled_p ())
>>             {
>>               dump_printf_loc (MSG_NOTE, vect_location,
>> @@ -7811,7 +7814,7 @@ vectorizable_induction (gimple *phi,
>>  vectorizable_live_operation (gimple *stmt,
>>                              gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
>>                              slp_tree slp_node, int slp_index,
>> -                            gimple **vec_stmt,
>> +                            stmt_vec_info *vec_stmt,
>>                              stmt_vector_for_cost *)
>>  {
>>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> Index: gcc/tree-vect-stmts.c
>> ===================================================================
>> --- gcc/tree-vect-stmts.c       2018-07-24 10:22:44.293185688 +0100
>> +++ gcc/tree-vect-stmts.c       2018-07-24 10:22:47.489157307 +0100
>> @@ -1465,7 +1465,7 @@ vect_init_vector (gimple *stmt, tree val
>>  vect_get_vec_def_for_operand_1 (gimple *def_stmt, enum vect_def_type dt)
>>  {
>>    tree vec_oprnd;
>> -  gimple *vec_stmt;
>> +  stmt_vec_info vec_stmt_info;
>>    stmt_vec_info def_stmt_info = NULL;
>>
>>    switch (dt)
>> @@ -1482,21 +1482,19 @@ vect_get_vec_def_for_operand_1 (gimple *
>>          /* Get the def from the vectorized stmt.  */
>>          def_stmt_info = vinfo_for_stmt (def_stmt);
>>
>> -        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
>> -        /* Get vectorized pattern statement.  */
>> -        if (!vec_stmt
>> -            && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
>> -            && !STMT_VINFO_RELEVANT (def_stmt_info))
>> -         vec_stmt = (STMT_VINFO_VEC_STMT
>> -                     (STMT_VINFO_RELATED_STMT (def_stmt_info)));
>> -        gcc_assert (vec_stmt);
>> -       if (gimple_code (vec_stmt) == GIMPLE_PHI)
>> -         vec_oprnd = PHI_RESULT (vec_stmt);
>> -       else if (is_gimple_call (vec_stmt))
>> -         vec_oprnd = gimple_call_lhs (vec_stmt);
>> +       vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
>> +       /* Get vectorized pattern statement.  */
>> +       if (!vec_stmt_info
>> +           && STMT_VINFO_IN_PATTERN_P (def_stmt_info)
>> +           && !STMT_VINFO_RELEVANT (def_stmt_info))
>> +         vec_stmt_info = (STMT_VINFO_VEC_STMT
>> +                          (STMT_VINFO_RELATED_STMT (def_stmt_info)));
>> +       gcc_assert (vec_stmt_info);
>> +       if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
>> +         vec_oprnd = PHI_RESULT (phi);
>>         else
>> -         vec_oprnd = gimple_assign_lhs (vec_stmt);
>> -        return vec_oprnd;
>> +         vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
>> +       return vec_oprnd;
>>        }
>>
>>      /* operand is defined by a loop header phi.  */
>> @@ -1507,14 +1505,14 @@ vect_get_vec_def_for_operand_1 (gimple *
>>        {
>>         gcc_assert (gimple_code (def_stmt) == GIMPLE_PHI);
>>
>> -        /* Get the def from the vectorized stmt.  */
>> -        def_stmt_info = vinfo_for_stmt (def_stmt);
>> -        vec_stmt = STMT_VINFO_VEC_STMT (def_stmt_info);
>> -       if (gimple_code (vec_stmt) == GIMPLE_PHI)
>> -         vec_oprnd = PHI_RESULT (vec_stmt);
>> +       /* Get the def from the vectorized stmt.  */
>> +       def_stmt_info = vinfo_for_stmt (def_stmt);
>> +       vec_stmt_info = STMT_VINFO_VEC_STMT (def_stmt_info);
>> +       if (gphi *phi = dyn_cast <gphi *> (vec_stmt_info->stmt))
>> +         vec_oprnd = PHI_RESULT (phi);
>>         else
>> -         vec_oprnd = gimple_get_lhs (vec_stmt);
>> -        return vec_oprnd;
>> +         vec_oprnd = gimple_get_lhs (vec_stmt_info->stmt);
>> +       return vec_oprnd;
>>        }
>>
>>      default:
>> @@ -2674,8 +2672,9 @@ vect_build_zero_merge_argument (gimple *
>>
>>  static void
>>  vect_build_gather_load_calls (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                             gimple **vec_stmt, gather_scatter_info *gs_info,
>> -                             tree mask, vect_def_type mask_dt)
>> +                             stmt_vec_info *vec_stmt,
>> +                             gather_scatter_info *gs_info, tree mask,
>> +                             vect_def_type mask_dt)
>>  {
>>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>> @@ -2960,7 +2959,7 @@ vect_get_data_ptr_increment (data_refere
>>
>>  static bool
>>  vectorizable_bswap (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                   gimple **vec_stmt, slp_tree slp_node,
>> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                     tree vectype_in, enum vect_def_type *dt,
>>                     stmt_vector_for_cost *cost_vec)
>>  {
>> @@ -3104,8 +3103,9 @@ simple_integer_narrowing (tree vectype_o
>>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>>
>>  static bool
>> -vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi, gimple **vec_stmt,
>> -                  slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>> +vectorizable_call (gimple *gs, gimple_stmt_iterator *gsi,
>> +                  stmt_vec_info *vec_stmt, slp_tree slp_node,
>> +                  stmt_vector_for_cost *cost_vec)
>>  {
>>    gcall *stmt;
>>    tree vec_dest;
>> @@ -3745,7 +3745,7 @@ simd_clone_subparts (tree vectype)
>>
>>  static bool
>>  vectorizable_simd_clone_call (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                             gimple **vec_stmt, slp_tree slp_node,
>> +                             stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                               stmt_vector_for_cost *)
>>  {
>>    tree vec_dest;
>> @@ -4596,7 +4596,7 @@ vect_create_vectorized_promotion_stmts (
>>
>>  static bool
>>  vectorizable_conversion (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                        gimple **vec_stmt, slp_tree slp_node,
>> +                        stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                          stmt_vector_for_cost *cost_vec)
>>  {
>>    tree vec_dest;
>> @@ -5204,7 +5204,7 @@ vectorizable_conversion (gimple *stmt, g
>>
>>  static bool
>>  vectorizable_assignment (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                        gimple **vec_stmt, slp_tree slp_node,
>> +                        stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                          stmt_vector_for_cost *cost_vec)
>>  {
>>    tree vec_dest;
>> @@ -5405,7 +5405,7 @@ vect_supportable_shift (enum tree_code c
>>
>>  static bool
>>  vectorizable_shift (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                    gimple **vec_stmt, slp_tree slp_node,
>> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                     stmt_vector_for_cost *cost_vec)
>>  {
>>    tree vec_dest;
>> @@ -5769,7 +5769,7 @@ vectorizable_shift (gimple *stmt, gimple
>>
>>  static bool
>>  vectorizable_operation (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                       gimple **vec_stmt, slp_tree slp_node,
>> +                       stmt_vec_info *vec_stmt, slp_tree slp_node,
>>                         stmt_vector_for_cost *cost_vec)
>>  {
>>    tree vec_dest;
>> @@ -6222,8 +6222,9 @@ get_group_alias_ptr_type (gimple *first_
>>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>>
>>  static bool
>> -vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
>> -                    slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>> +vectorizable_store (gimple *stmt, gimple_stmt_iterator *gsi,
>> +                   stmt_vec_info *vec_stmt, slp_tree slp_node,
>> +                   stmt_vector_for_cost *cost_vec)
>>  {
>>    tree data_ref;
>>    tree op;
>> @@ -7385,8 +7386,9 @@ hoist_defs_of_uses (gimple *stmt, struct
>>     Return FALSE if not a vectorizable STMT, TRUE otherwise.  */
>>
>>  static bool
>> -vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi, gimple **vec_stmt,
>> -                   slp_tree slp_node, slp_instance slp_node_instance,
>> +vectorizable_load (gimple *stmt, gimple_stmt_iterator *gsi,
>> +                  stmt_vec_info *vec_stmt, slp_tree slp_node,
>> +                  slp_instance slp_node_instance,
>>                    stmt_vector_for_cost *cost_vec)
>>  {
>>    tree scalar_dest;
>> @@ -8710,8 +8712,9 @@ vect_is_simple_cond (tree cond, vec_info
>>
>>  bool
>>  vectorizable_condition (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                       gimple **vec_stmt, tree reduc_def, int reduc_index,
>> -                       slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>> +                       stmt_vec_info *vec_stmt, tree reduc_def,
>> +                       int reduc_index, slp_tree slp_node,
>> +                       stmt_vector_for_cost *cost_vec)
>>  {
>>    tree scalar_dest = NULL_TREE;
>>    tree vec_dest = NULL_TREE;
>> @@ -9111,7 +9114,7 @@ vectorizable_condition (gimple *stmt, gi
>>
>>  static bool
>>  vectorizable_comparison (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                        gimple **vec_stmt, tree reduc_def,
>> +                        stmt_vec_info *vec_stmt, tree reduc_def,
>>                          slp_tree slp_node, stmt_vector_for_cost *cost_vec)
>>  {
>>    tree lhs, rhs1, rhs2;
>> @@ -9383,7 +9386,7 @@ vectorizable_comparison (gimple *stmt, g
>>
>>  static bool
>>  can_vectorize_live_stmts (gimple *stmt, gimple_stmt_iterator *gsi,
>> -                         slp_tree slp_node, gimple **vec_stmt,
>> +                         slp_tree slp_node, stmt_vec_info *vec_stmt,
>>                           stmt_vector_for_cost *cost_vec)
>>  {
>>    if (slp_node)
>> @@ -9647,11 +9650,11 @@ vect_transform_stmt (gimple *stmt, gimpl
>>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>>    vec_info *vinfo = stmt_info->vinfo;
>>    bool is_store = false;
>> -  gimple *vec_stmt = NULL;
>> +  stmt_vec_info vec_stmt = NULL;
>>    bool done;
>>
>>    gcc_assert (slp_node || !PURE_SLP_STMT (stmt_info));
>> -  gimple *old_vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
>> +  stmt_vec_info old_vec_stmt_info = STMT_VINFO_VEC_STMT (stmt_info);
>>
>>    bool nested_p = (STMT_VINFO_LOOP_VINFO (stmt_info)
>>                    && nested_in_vect_loop_p
>> @@ -9752,7 +9755,7 @@ vect_transform_stmt (gimple *stmt, gimpl
>>       This would break hybrid SLP vectorization.  */
>>    if (slp_node)
>>      gcc_assert (!vec_stmt
>> -               && STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt);
>> +               && STMT_VINFO_VEC_STMT (stmt_info) == old_vec_stmt_info);
>>
>>    /* Handle inner-loop stmts whose DEF is used in the loop-nest that
>>       is being vectorized, but outside the immediately enclosing loop.  */

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-25 10:15   ` Richard Biener
@ 2018-07-25 11:09     ` Richard Sandiford
  2018-07-25 11:48       ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-25 11:09 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:07 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
>> possible to tell whether the statement is a pattern statement without
>> referring to other statements.  The new field goes in what was
>> previously a hole in the structure, so the size is the same as before.
>
> Not sure what the advantage is?  is_pattern_stmt_p () looks nicer
> than ->is_pattern_p

I can keep the function wrapper if you prefer that.  But having a
statement "know" whether it's a pattern stmt makes things like
freeing stmt_vec_infos simpler (see later patches in the series).
It should also be cheaper to test, but that's much more minor.
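
To illustrate (just a sketch that restates the before/after forms
from the patch, nothing new):

    /* Before: classifying a statement needs a second lookup through
       its related statement.  */
    static inline bool
    is_pattern_stmt_p (stmt_vec_info stmt_info)
    {
      stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
      return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
    }

    /* After: the stmt_vec_info carries the answer itself, so the test
       needs no other statement and stays valid while related
       stmt_vec_infos are being freed.  */
    if (stmt_info->pattern_stmt_p)
      ...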

Thanks,
Richard

>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
>>         (is_pattern_stmt_p): Delete.
>>         * tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
>>         on pattern statements.
>>         (vect_split_statement, vect_mark_pattern_stmts): Use the new
>>         pattern_stmt_p field instead of is_pattern_stmt_p.
>>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
>>         * tree-vect-loop.c (vectorizable_live_operation): Likewise.
>>         * tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
>>         (vect_find_last_scalar_stmt_in_slp, vect_remove_slp_scalar_calls)
>>         (vect_schedule_slp): Likewise.
>>         * tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
>>         (vectorizable_call, vectorizable_simd_clone_call, vectorizable_shift)
>>         (vectorizable_store, vect_remove_stores): Likewise.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:23:56.440544995 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:02.364492386 +0100
>> @@ -791,6 +791,12 @@ struct _stmt_vec_info {
>>    /* Stmt is part of some pattern (computation idiom)  */
>>    bool in_pattern_p;
>>
>> +  /* True if the statement was created during pattern recognition as
>> +     part of the replacement for RELATED_STMT.  This implies that the
>> +     statement isn't part of any basic block, although for convenience
>> +     its gimple_bb is the same as for RELATED_STMT.  */
>> +  bool pattern_stmt_p;
>> +
>>    /* Is this statement vectorizable or should it be skipped in (partial)
>>       vectorization.  */
>>    bool vectorizable;
>> @@ -1151,16 +1157,6 @@ get_later_stmt (stmt_vec_info stmt1_info
>>      return stmt2_info;
>>  }
>>
>> -/* Return TRUE if a statement represented by STMT_INFO is a part of a
>> -   pattern.  */
>> -
>> -static inline bool
>> -is_pattern_stmt_p (stmt_vec_info stmt_info)
>> -{
>> -  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>> -  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
>> -}
>> -
>>  /* Return true if BB is a loop header.  */
>>
>>  static inline bool
>> Index: gcc/tree-vect-patterns.c
>> ===================================================================
>> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:59.408518638 +0100
>> +++ gcc/tree-vect-patterns.c    2018-07-24 10:24:02.360492422 +0100
>> @@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
>>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>>
>> +  pattern_stmt_info->pattern_stmt_p = true;
>>    STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
>>    STMT_VINFO_DEF_TYPE (pattern_stmt_info)
>>      = STMT_VINFO_DEF_TYPE (orig_stmt_info);
>> @@ -630,7 +631,7 @@ vect_recog_temp_ssa_var (tree type, gimp
>>  vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
>>                       gimple *stmt1, tree vectype)
>>  {
>> -  if (is_pattern_stmt_p (stmt2_info))
>> +  if (stmt2_info->pattern_stmt_p)
>>      {
>>        /* STMT2_INFO is part of a pattern.  Get the statement to which
>>          the pattern is attached.  */
>> @@ -4726,7 +4727,7 @@ vect_mark_pattern_stmts (stmt_vec_info o
>>    gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
>>
>>    gimple *orig_pattern_stmt = NULL;
>> -  if (is_pattern_stmt_p (orig_stmt_info))
>> +  if (orig_stmt_info->pattern_stmt_p)
>>      {
>>        /* We're replacing a statement in an existing pattern definition
>>          sequence.  */
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:53.204573732 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:02.356492457 +0100
>> @@ -212,9 +212,9 @@ vect_preserves_scalar_order_p (stmt_vec_
>>       (but could happen later) while reads will happen no later than their
>>       current position (but could happen earlier).  Reordering is therefore
>>       only possible if the first access is a write.  */
>> -  if (is_pattern_stmt_p (stmtinfo_a))
>> +  if (stmtinfo_a->pattern_stmt_p)
>>      stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
>> -  if (is_pattern_stmt_p (stmtinfo_b))
>> +  if (stmtinfo_b->pattern_stmt_p)
>>      stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
>>    stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
>>    return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
>> Index: gcc/tree-vect-loop.c
>> ===================================================================
>> --- gcc/tree-vect-loop.c        2018-07-24 10:23:56.436545030 +0100
>> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:02.360492422 +0100
>> @@ -7907,7 +7907,7 @@ vectorizable_live_operation (stmt_vec_in
>>      }
>>
>>    /* If stmt has a related stmt, then use that for getting the lhs.  */
>> -  gimple *stmt = (is_pattern_stmt_p (stmt_info)
>> +  gimple *stmt = (stmt_info->pattern_stmt_p
>>                   ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
>>                   : stmt_info->stmt);
>>
>> Index: gcc/tree-vect-slp.c
>> ===================================================================
>> --- gcc/tree-vect-slp.c 2018-07-24 10:23:53.204573732 +0100
>> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:02.360492422 +0100
>> @@ -376,7 +376,7 @@ vect_get_and_check_slp_defs (vec_info *v
>>        /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
>>          the def stmt from the pattern.  Check that all the stmts of the
>>          node are in the pattern.  */
>> -      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
>> +      if (def_stmt_info && def_stmt_info->pattern_stmt_p)
>>          {
>>            pattern = true;
>>            if (!first && !oprnd_info->first_pattern
>> @@ -1315,7 +1315,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>                  do extra work to cancel the pattern so the uses see the
>>                  scalar version.  */
>> -             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +             && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
>>             {
>>               slp_tree grandchild;
>>
>> @@ -1359,7 +1359,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>              do extra work to cancel the pattern so the uses see the
>>              scalar version.  */
>> -         && !is_pattern_stmt_p (stmt_info))
>> +         && !stmt_info->pattern_stmt_p)
>>         {
>>           dump_printf_loc (MSG_NOTE, vect_location,
>>                            "Building vector operands from scalars\n");
>> @@ -1486,7 +1486,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>>                      to do extra work to cancel the pattern so the uses see the
>>                      scalar version.  */
>> -                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +                 && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
>>                 {
>>                   unsigned int j;
>>                   slp_tree grandchild;
>> @@ -1848,7 +1848,7 @@ vect_find_last_scalar_stmt_in_slp (slp_t
>>
>>    for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>>      {
>> -      if (is_pattern_stmt_p (stmt_vinfo))
>> +      if (stmt_vinfo->pattern_stmt_p)
>>         stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
>>        last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
>>      }
>> @@ -4044,8 +4044,7 @@ vect_remove_slp_scalar_calls (slp_tree n
>>        gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
>>        if (!stmt || gimple_bb (stmt) == NULL)
>>         continue;
>> -      if (is_pattern_stmt_p (stmt_info)
>> -         || !PURE_SLP_STMT (stmt_info))
>> +      if (stmt_info->pattern_stmt_p || !PURE_SLP_STMT (stmt_info))
>>         continue;
>>        lhs = gimple_call_lhs (stmt);
>>        new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
>> @@ -4106,7 +4105,7 @@ vect_schedule_slp (vec_info *vinfo)
>>           if (!STMT_VINFO_DATA_REF (store_info))
>>             break;
>>
>> -         if (is_pattern_stmt_p (store_info))
>> +         if (store_info->pattern_stmt_p)
>>             store_info = STMT_VINFO_RELATED_STMT (store_info);
>>           /* Free the attached stmt_vec_info and remove the stmt.  */
>>           gsi = gsi_for_stmt (store_info);
>> Index: gcc/tree-vect-stmts.c
>> ===================================================================
>> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:56.440544995 +0100
>> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:02.364492386 +0100
>> @@ -731,7 +731,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
>>              break;
>>          }
>>
>> -      if (is_pattern_stmt_p (stmt_vinfo))
>> +      if (stmt_vinfo->pattern_stmt_p)
>>          {
>>            /* Pattern statements are not inserted into the code, so
>>               FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
>> @@ -3623,7 +3623,7 @@ vectorizable_call (stmt_vec_info stmt_in
>>    if (slp_node)
>>      return true;
>>
>> -  if (is_pattern_stmt_p (stmt_info))
>> +  if (stmt_info->pattern_stmt_p)
>>      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
>>    lhs = gimple_get_lhs (stmt_info->stmt);
>>
>> @@ -4362,7 +4362,7 @@ vectorizable_simd_clone_call (stmt_vec_i
>>    if (scalar_dest)
>>      {
>>        type = TREE_TYPE (scalar_dest);
>> -      if (is_pattern_stmt_p (stmt_info))
>> +      if (stmt_info->pattern_stmt_p)
>>         lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
>>        else
>>         lhs = gimple_call_lhs (stmt);
>> @@ -5552,7 +5552,7 @@ vectorizable_shift (stmt_vec_info stmt_i
>>        /* If the shift amount is computed by a pattern stmt we cannot
>>           use the scalar amount directly thus give up and use a vector
>>          shift.  */
>> -      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
>> +      if (op1_def_stmt_info && op1_def_stmt_info->pattern_stmt_p)
>>         scalar_shift_arg = false;
>>      }
>>    else
>> @@ -6286,7 +6286,7 @@ vectorizable_store (stmt_vec_info stmt_i
>>      {
>>        tree scalar_dest = gimple_assign_lhs (assign);
>>        if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
>> -         && is_pattern_stmt_p (stmt_info))
>> +         && stmt_info->pattern_stmt_p)
>>         scalar_dest = TREE_OPERAND (scalar_dest, 0);
>>        if (TREE_CODE (scalar_dest) != ARRAY_REF
>>           && TREE_CODE (scalar_dest) != BIT_FIELD_REF
>> @@ -9839,7 +9839,7 @@ vect_remove_stores (stmt_vec_info first_
>>    while (next_stmt_info)
>>      {
>>        stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
>> -      if (is_pattern_stmt_p (next_stmt_info))
>> +      if (next_stmt_info->pattern_stmt_p)
>>         next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
>>        /* Free the attached stmt_vec_info and remove the stmt.  */
>>        next_si = gsi_for_stmt (next_stmt_info->stmt);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [38/46] Pass stmt_vec_infos instead of data_references where relevant
  2018-07-25 10:21   ` Richard Biener
@ 2018-07-25 11:21     ` Richard Sandiford
  2018-07-26 11:05       ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-25 11:21 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch makes various routines (mostly in tree-vect-data-refs.c)
>> take stmt_vec_infos rather than data_references.  The affected routines
>> are really dealing with the way that an access is going to vectorised
>> for a particular stmt_vec_info, rather than with the original scalar
>> access described by the data_reference.
>
> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
> the functions?

Not sure.  If we...

> We currently cannot handle aggregate copies in the to-be-vectorized IL
> but rely on SRA and friends to elide those.  That's the only two-DR
> stmt I can think of for vectorization.  Maybe aggregate by-value / return
> function calls with OMP SIMD if that supports this somehow.

...did this then I don't think a data_reference would be the natural
way of identifying a DR within a stmt_vec_info.  Presumably the
stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
If both of those were vectors then a (stmt_vec_info, index) pair
might make more sense than (stmt_vec_info, data_reference).
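
In sketch form (a hypothetical layout for illustration, not part of
the patch):

    struct _stmt_vec_info
    {
      ...
      vec<data_reference_p> data_refs;  /* instead of one STMT_VINFO_DATA_REF */
      vec<dataref_aux> dr_auxs;         /* parallel per-DR aux info */
    };
    /* An access would then be named by (stmt_info, index) rather
       than by (stmt_info, data_reference).  */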

Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
so that there's a back-pointer to the DR, add a stmt_vec_info
field to dataref_aux too, and then use dataref_aux instead of
stmt_vec_info as the key.
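
i.e. roughly (again just a sketch, with the new fields invented for
illustration):

    struct dataref_aux
    {
      data_reference *dr;       /* back-pointer to the DR */
      stmt_vec_info stmt_info;  /* statement containing the access */
      ...                       /* existing alignment fields */
    };

with vect_supportable_dr_alignment and friends then keyed off the
dataref_aux itself.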

Thanks,
Richard

>
> Richard.
>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (vect_supportable_dr_alignment): Take
>>         a stmt_vec_info rather than a data_reference.
>>         * tree-vect-data-refs.c (vect_calculate_target_alignment)
>>         (vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
>>         (verify_data_ref_alignment, vector_alignment_reachable_p)
>>         (vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
>>         (vect_peeling_supportable, vect_analyze_group_access_1)
>>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>>         (vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
>>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
>>         (vect_supportable_dr_alignment): Likewise.  Update calls to other
>>         functions for which the same change is being made.
>>         (vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
>>         (vect_analyze_data_refs_alignment): Update calls accordingly.
>>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>>         (vect_analyze_data_ref_accesses): Likewise.
>>         (vect_prune_runtime_alias_test_list): Likewise.
>>         (vect_create_addr_base_for_vector_ref): Likewise.
>>         (vect_create_data_ref_ptr): Likewise.
>>         (_vect_peel_info::dr): Replace with...
>>         (_vect_peel_info::stmt_info): ...this new field.
>>         (vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
>>         accordingly, and update after above interface changes.
>>         (vect_peeling_hash_get_lowest_cost): Likewise
>>         (vect_peeling_hash_choose_best_peeling): Likewise.
>>         (vect_enhance_data_refs_alignment): Likewise.
>>         (vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
>>         rather than a data_reference.
>>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>>         (get_negative_load_store_type): Update calls to
>>         vect_supportable_dr_alignment.
>>         (vect_get_data_ptr_increment, ensure_base_align): Take a
>>         stmt_vec_info instead of a data_reference.
>>         (vectorizable_store, vectorizable_load): Update calls after
>>         above interface changes.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:24:05.744462369 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:08.924434128 +0100
>> @@ -1541,7 +1541,7 @@ extern tree vect_get_mask_type_for_stmt
>>  /* In tree-vect-data-refs.c.  */
>>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>>  extern enum dr_alignment_support vect_supportable_dr_alignment
>> -                                           (struct data_reference *, bool);
>> +  (stmt_vec_info, bool);
>>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>>                                             HOST_WIDE_INT *);
>>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:24:05.740462405 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:08.924434128 +0100
>> @@ -858,19 +858,19 @@ vect_record_base_alignments (vec_info *v
>>      }
>>  }
>>
>> -/* Return the target alignment for the vectorized form of DR.  */
>> +/* Return the target alignment for the vectorized form of the load or store
>> +   in STMT_INFO.  */
>>
>>  static unsigned int
>> -vect_calculate_target_alignment (struct data_reference *dr)
>> +vect_calculate_target_alignment (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>    return targetm.vectorize.preferred_vector_alignment (vectype);
>>  }
>>
>>  /* Function vect_compute_data_ref_alignment
>>
>> -   Compute the misalignment of the data reference DR.
>> +   Compute the misalignment of the load or store in STMT_INFO.
>>
>>     Output:
>>     1. dr_misalignment (STMT_INFO) is defined.
>> @@ -879,9 +879,9 @@ vect_calculate_target_alignment (struct
>>     only for trivial cases. TODO.  */
>>
>>  static void
>> -vect_compute_data_ref_alignment (struct data_reference *dr)
>> +vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    struct loop *loop = NULL;
>> @@ -905,7 +905,7 @@ vect_compute_data_ref_alignment (struct
>>    bool step_preserves_misalignment_p;
>>
>>    unsigned HOST_WIDE_INT vector_alignment
>> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
>> +    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
>>    STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
>>
>>    /* No step for BB vectorization.  */
>> @@ -1053,28 +1053,28 @@ vect_compute_data_ref_alignment (struct
>>  }
>>
>>  /* Function vect_update_misalignment_for_peel.
>> -   Sets DR's misalignment
>> -   - to 0 if it has the same alignment as DR_PEEL,
>> +   Sets the misalignment of the load or store in STMT_INFO
>> +   - to 0 if it has the same alignment as PEEL_STMT_INFO,
>>     - to the misalignment computed using NPEEL if DR's misalignment is known,
>>     - to -1 (unknown) otherwise.
>>
>> -   DR - the data reference whose misalignment is to be adjusted.
>> -   DR_PEEL - the data reference whose misalignment is being made
>> -             zero in the vector loop by the peel.
>> +   STMT_INFO - the load or store whose misalignment is to be adjusted.
>> +   PEEL_STMT_INFO - the load or store whose misalignment is being made
>> +                   zero in the vector loop by the peel.
>>     NPEEL - the number of iterations in the peel loop if the misalignment
>> -           of DR_PEEL is known at compile time.  */
>> +          of PEEL_STMT_INFO is known at compile time.  */
>>
>>  static void
>> -vect_update_misalignment_for_peel (struct data_reference *dr,
>> -                                   struct data_reference *dr_peel, int npeel)
>> +vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
>> +                                  stmt_vec_info peel_stmt_info, int npeel)
>>  {
>>    unsigned int i;
>>    vec<dr_p> same_aligned_drs;
>>    struct data_reference *current_dr;
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> +  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
>>    int dr_size = vect_get_scalar_dr_size (dr);
>>    int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
>>
>>   /* For interleaved data accesses the step in the loop must be multiplied by
>>       the size of the interleaving group.  */
>> @@ -1085,7 +1085,7 @@ vect_update_misalignment_for_peel (struc
>>
>>    /* It can be assumed that the data refs with the same alignment as dr_peel
>>       are aligned in the vector loop.  */
>> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
>> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>>      {
>>        if (current_dr != dr)
>> @@ -1118,13 +1118,15 @@ vect_update_misalignment_for_peel (struc
>>
>>  /* Function verify_data_ref_alignment
>>
>> -   Return TRUE if DR can be handled with respect to alignment.  */
>> +   Return TRUE if the load or store in STMT_INFO can be handled with
>> +   respect to alignment.  */
>>
>>  static bool
>> -verify_data_ref_alignment (data_reference_p dr)
>> +verify_data_ref_alignment (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    enum dr_alignment_support supportable_dr_alignment
>> -    = vect_supportable_dr_alignment (dr, false);
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>    if (!supportable_dr_alignment)
>>      {
>>        if (dump_enabled_p ())
>> @@ -1181,7 +1183,7 @@ vect_verify_datarefs_alignment (loop_vec
>>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         continue;
>>
>> -      if (! verify_data_ref_alignment (dr))
>> +      if (! verify_data_ref_alignment (stmt_info))
>>         return false;
>>      }
>>
>> @@ -1203,13 +1205,13 @@ not_size_aligned (tree exp)
>>
>>  /* Function vector_alignment_reachable_p
>>
>> -   Return true if vector alignment for DR is reachable by peeling
>> -   a few loop iterations.  Return false otherwise.  */
>> +   Return true if the vector alignment is reachable for the load or store
>> +   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
>>
>>  static bool
>> -vector_alignment_reachable_p (struct data_reference *dr)
>> +vector_alignment_reachable_p (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>
>>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>> @@ -1270,16 +1272,16 @@ vector_alignment_reachable_p (struct dat
>>  }
>>
>>
>> -/* Calculate the cost of the memory access represented by DR.  */
>> +/* Calculate the cost of the memory access in STMT_INFO.  */
>>
>>  static void
>> -vect_get_data_access_cost (struct data_reference *dr,
>> +vect_get_data_access_cost (stmt_vec_info stmt_info,
>>                             unsigned int *inside_cost,
>>                             unsigned int *outside_cost,
>>                            stmt_vector_for_cost *body_cost_vec,
>>                            stmt_vector_for_cost *prologue_cost_vec)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    int ncopies;
>>
>> @@ -1303,7 +1305,7 @@ vect_get_data_access_cost (struct data_r
>>
>>  typedef struct _vect_peel_info
>>  {
>> -  struct data_reference *dr;
>> +  stmt_vec_info stmt_info;
>>    int npeel;
>>    unsigned int count;
>>  } *vect_peel_info;
>> @@ -1337,16 +1339,17 @@ peel_info_hasher::equal (const _vect_pee
>>  }
>>
>>
>> -/* Insert DR into peeling hash table with NPEEL as key.  */
>> +/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
>>
>>  static void
>>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
>> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
>> +                         loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
>>                            int npeel)
>>  {
>>    struct _vect_peel_info elem, *slot;
>>    _vect_peel_info **new_slot;
>> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
>> +  bool supportable_dr_alignment
>> +    = vect_supportable_dr_alignment (stmt_info, true);
>>
>>    elem.npeel = npeel;
>>    slot = peeling_htab->find (&elem);
>> @@ -1356,7 +1359,7 @@ vect_peeling_hash_insert (hash_table<pee
>>      {
>>        slot = XNEW (struct _vect_peel_info);
>>        slot->npeel = npeel;
>> -      slot->dr = dr;
>> +      slot->stmt_info = stmt_info;
>>        slot->count = 1;
>>        new_slot = peeling_htab->find_slot (slot, INSERT);
>>        *new_slot = slot;
>> @@ -1383,19 +1386,19 @@ vect_peeling_hash_get_most_frequent (_ve
>>      {
>>        max->peel_info.npeel = elem->npeel;
>>        max->peel_info.count = elem->count;
>> -      max->peel_info.dr = elem->dr;
>> +      max->peel_info.stmt_info = elem->stmt_info;
>>      }
>>
>>    return 1;
>>  }
>>
>>  /* Get the costs of peeling NPEEL iterations checking data access costs
>> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
>> -   misalignment will be zero after peeling.  */
>> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
>> +   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
>>
>>  static void
>>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
>> -                               struct data_reference *dr0,
>> +                               stmt_vec_info peel_stmt_info,
>>                                 unsigned int *inside_cost,
>>                                 unsigned int *outside_cost,
>>                                 stmt_vector_for_cost *body_cost_vec,
>> @@ -1403,8 +1406,6 @@ vect_get_peeling_costs_all_drs (vec<data
>>                                 unsigned int npeel,
>>                                 bool unknown_misalignment)
>>  {
>> -  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
>> -                                 : NULL_STMT_VEC_INFO);
>>    unsigned i;
>>    data_reference *dr;
>>
>> @@ -1433,8 +1434,8 @@ vect_get_peeling_costs_all_drs (vec<data
>>        else if (unknown_misalignment && stmt_info == peel_stmt_info)
>>         set_dr_misalignment (stmt_info, 0);
>>        else
>> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
>> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
>> +       vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
>> +      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
>>                                  body_cost_vec, prologue_cost_vec);
>>        set_dr_misalignment (stmt_info, save_misalignment);
>>      }
>> @@ -1450,7 +1451,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>    vect_peel_info elem = *slot;
>>    int dummy;
>>    unsigned int inside_cost = 0, outside_cost = 0;
>> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
>> +  stmt_vec_info stmt_info = elem->stmt_info;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>>                        epilogue_cost_vec;
>> @@ -1460,7 +1461,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>    epilogue_cost_vec.create (2);
>>
>>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
>> -                                 elem->dr, &inside_cost, &outside_cost,
>> +                                 elem->stmt_info, &inside_cost, &outside_cost,
>>                                   &body_cost_vec, &prologue_cost_vec,
>>                                   elem->npeel, false);
>>
>> @@ -1484,7 +1485,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>      {
>>        min->inside_cost = inside_cost;
>>        min->outside_cost = outside_cost;
>> -      min->peel_info.dr = elem->dr;
>> +      min->peel_info.stmt_info = elem->stmt_info;
>>        min->peel_info.npeel = elem->npeel;
>>        min->peel_info.count = elem->count;
>>      }
>> @@ -1503,7 +1504,7 @@ vect_peeling_hash_choose_best_peeling (h
>>  {
>>     struct _vect_peel_extended_info res;
>>
>> -   res.peel_info.dr = NULL;
>> +   res.peel_info.stmt_info = NULL;
>>
>>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>>       {
>> @@ -1527,8 +1528,8 @@ vect_peeling_hash_choose_best_peeling (h
>>  /* Return true if the new peeling NPEEL is supported.  */
>>
>>  static bool
>> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
>> -                         unsigned npeel)
>> +vect_peeling_supportable (loop_vec_info loop_vinfo,
>> +                         stmt_vec_info peel_stmt_info, unsigned npeel)
>>  {
>>    unsigned i;
>>    struct data_reference *dr = NULL;
>> @@ -1540,10 +1541,10 @@ vect_peeling_supportable (loop_vec_info
>>      {
>>        int save_misalignment;
>>
>> -      if (dr == dr0)
>> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +      if (stmt_info == peel_stmt_info)
>>         continue;
>>
>> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>        /* For interleaving, only the alignment of the first access
>>          matters.  */
>>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> @@ -1557,8 +1558,9 @@ vect_peeling_supportable (loop_vec_info
>>         continue;
>>
>>        save_misalignment = dr_misalignment (stmt_info);
>> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
>> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
>> +      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
>> +      supportable_dr_alignment
>> +       = vect_supportable_dr_alignment (stmt_info, false);
>>        set_dr_misalignment (stmt_info, save_misalignment);
>>
>>        if (!supportable_dr_alignment)
>> @@ -1665,8 +1667,9 @@ vect_enhance_data_refs_alignment (loop_v
>>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>>    enum dr_alignment_support supportable_dr_alignment;
>> -  struct data_reference *dr0 = NULL, *first_store = NULL;
>>    struct data_reference *dr;
>> +  stmt_vec_info peel_stmt_info = NULL;
>> +  stmt_vec_info first_store_info = NULL;
>>    unsigned int i, j;
>>    bool do_peeling = false;
>>    bool do_versioning = false;
>> @@ -1675,7 +1678,7 @@ vect_enhance_data_refs_alignment (loop_v
>>    bool one_misalignment_known = false;
>>    bool one_misalignment_unknown = false;
>>    bool one_dr_unsupportable = false;
>> -  struct data_reference *unsupportable_dr = NULL;
>> +  stmt_vec_info unsupportable_stmt_info = NULL;
>>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>>    unsigned possible_npeel_number = 1;
>>    tree vectype;
>> @@ -1745,8 +1748,9 @@ vect_enhance_data_refs_alignment (loop_v
>>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         continue;
>>
>> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
>> -      do_peeling = vector_alignment_reachable_p (dr);
>> +      supportable_dr_alignment
>> +       = vect_supportable_dr_alignment (stmt_info, true);
>> +      do_peeling = vector_alignment_reachable_p (stmt_info);
>>        if (do_peeling)
>>          {
>>           if (known_alignment_for_access_p (stmt_info))
>> @@ -1796,7 +1800,7 @@ vect_enhance_data_refs_alignment (loop_v
>>                for (j = 0; j < possible_npeel_number; j++)
>>                  {
>>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
>> -                                           dr, npeel_tmp);
>> +                                           stmt_info, npeel_tmp);
>>                   npeel_tmp += target_align / dr_size;
>>                  }
>>
>> @@ -1810,11 +1814,11 @@ vect_enhance_data_refs_alignment (loop_v
>>                   stores over load.  */
>>               unsigned same_align_drs
>>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
>> -             if (!dr0
>> +             if (!peel_stmt_info
>>                   || same_align_drs_max < same_align_drs)
>>                 {
>>                   same_align_drs_max = same_align_drs;
>> -                 dr0 = dr;
>> +                 peel_stmt_info = stmt_info;
>>                 }
>>               /* For data-refs with the same number of related
>>                  accesses prefer the one where the misalign
>> @@ -1822,6 +1826,7 @@ vect_enhance_data_refs_alignment (loop_v
>>               else if (same_align_drs_max == same_align_drs)
>>                 {
>>                   struct loop *ivloop0, *ivloop;
>> +                 data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>>                   ivloop0 = outermost_invariant_loop_for_expr
>>                     (loop, DR_BASE_ADDRESS (dr0));
>>                   ivloop = outermost_invariant_loop_for_expr
>> @@ -1829,7 +1834,7 @@ vect_enhance_data_refs_alignment (loop_v
>>                   if ((ivloop && !ivloop0)
>>                       || (ivloop && ivloop0
>>                           && flow_loop_nested_p (ivloop, ivloop0)))
>> -                   dr0 = dr;
>> +                   peel_stmt_info = stmt_info;
>>                 }
>>
>>               one_misalignment_unknown = true;
>> @@ -1839,11 +1844,11 @@ vect_enhance_data_refs_alignment (loop_v
>>               if (!supportable_dr_alignment)
>>               {
>>                 one_dr_unsupportable = true;
>> -               unsupportable_dr = dr;
>> +               unsupportable_stmt_info = stmt_info;
>>               }
>>
>> -             if (!first_store && DR_IS_WRITE (dr))
>> -               first_store = dr;
>> +             if (!first_store_info && DR_IS_WRITE (dr))
>> +               first_store_info = stmt_info;
>>              }
>>          }
>>        else
>> @@ -1886,16 +1891,16 @@ vect_enhance_data_refs_alignment (loop_v
>>
>>        stmt_vector_for_cost dummy;
>>        dummy.create (2);
>> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
>> +      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
>>                                       &load_inside_cost,
>>                                       &load_outside_cost,
>>                                       &dummy, &dummy, estimated_npeels, true);
>>        dummy.release ();
>>
>> -      if (first_store)
>> +      if (first_store_info)
>>         {
>>           dummy.create (2);
>> -         vect_get_peeling_costs_all_drs (datarefs, first_store,
>> +         vect_get_peeling_costs_all_drs (datarefs, first_store_info,
>>                                           &store_inside_cost,
>>                                           &store_outside_cost,
>>                                           &dummy, &dummy,
>> @@ -1912,7 +1917,7 @@ vect_enhance_data_refs_alignment (loop_v
>>           || (load_inside_cost == store_inside_cost
>>               && load_outside_cost > store_outside_cost))
>>         {
>> -         dr0 = first_store;
>> +         peel_stmt_info = first_store_info;
>>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>>         }
>> @@ -1936,18 +1941,18 @@ vect_enhance_data_refs_alignment (loop_v
>>        epilogue_cost_vec.release ();
>>
>>        peel_for_unknown_alignment.peel_info.count = 1
>> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
>> +       + STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
>>      }
>>
>>    peel_for_unknown_alignment.peel_info.npeel = 0;
>> -  peel_for_unknown_alignment.peel_info.dr = dr0;
>> +  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
>>
>>    best_peel = peel_for_unknown_alignment;
>>
>>    peel_for_known_alignment.inside_cost = INT_MAX;
>>    peel_for_known_alignment.outside_cost = INT_MAX;
>>    peel_for_known_alignment.peel_info.count = 0;
>> -  peel_for_known_alignment.peel_info.dr = NULL;
>> +  peel_for_known_alignment.peel_info.stmt_info = NULL;
>>
>>    if (do_peeling && one_misalignment_known)
>>      {
>> @@ -1959,7 +1964,7 @@ vect_enhance_data_refs_alignment (loop_v
>>      }
>>
>>    /* Compare costs of peeling for known and unknown alignment. */
>> -  if (peel_for_known_alignment.peel_info.dr != NULL
>> +  if (peel_for_known_alignment.peel_info.stmt_info
>>        && peel_for_unknown_alignment.inside_cost
>>        >= peel_for_known_alignment.inside_cost)
>>      {
>> @@ -1976,7 +1981,7 @@ vect_enhance_data_refs_alignment (loop_v
>>       since we'd have to discard a chosen peeling except when it accidentally
>>       aligned the unsupportable data ref.  */
>>    if (one_dr_unsupportable)
>> -    dr0 = unsupportable_dr;
>> +    peel_stmt_info = unsupportable_stmt_info;
>>    else if (do_peeling)
>>      {
>>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
>> @@ -2007,7 +2012,7 @@ vect_enhance_data_refs_alignment (loop_v
>>        epilogue_cost_vec.release ();
>>
>>        npeel = best_peel.peel_info.npeel;
>> -      dr0 = best_peel.peel_info.dr;
>> +      peel_stmt_info = best_peel.peel_info.stmt_info;
>>
>>        /* If no peeling is not more expensive than the best peeling we
>>          have so far, don't perform any peeling.  */
>> @@ -2017,8 +2022,8 @@ vect_enhance_data_refs_alignment (loop_v
>>
>>    if (do_peeling)
>>      {
>> -      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
>>        vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
>> +      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>>
>>        if (known_alignment_for_access_p (peel_stmt_info))
>>          {
>> @@ -2052,7 +2057,7 @@ vect_enhance_data_refs_alignment (loop_v
>>          }
>>
>>        /* Ensure that all datarefs can be vectorized after the peel.  */
>> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
>> +      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
>>         do_peeling = false;
>>
>>        /* Check if all datarefs are supportable and log.  */
>> @@ -2125,7 +2130,8 @@ vect_enhance_data_refs_alignment (loop_v
>>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>                   continue;
>>
>> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
>> +               vect_update_misalignment_for_peel (stmt_info,
>> +                                                  peel_stmt_info, npeel);
>>               }
>>
>>            LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
>> @@ -2188,7 +2194,8 @@ vect_enhance_data_refs_alignment (loop_v
>>               break;
>>             }
>>
>> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
>> +         supportable_dr_alignment
>> +           = vect_supportable_dr_alignment (stmt_info, false);
>>
>>            if (!supportable_dr_alignment)
>>              {
>> @@ -2203,7 +2210,6 @@ vect_enhance_data_refs_alignment (loop_v
>>                    break;
>>                  }
>>
>> -             stmt_info = vect_dr_stmt (dr);
>>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>>               gcc_assert (vectype);
>>
>> @@ -2314,9 +2320,9 @@ vect_find_same_alignment_drs (struct dat
>>    if (maybe_ne (diff, 0))
>>      {
>>        /* Get the wider of the two alignments.  */
>> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
>> +      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
>>                               / BITS_PER_UNIT);
>> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
>> +      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
>>                               / BITS_PER_UNIT);
>>        unsigned int max_align = MAX (align_a, align_b);
>>
>> @@ -2366,7 +2372,7 @@ vect_analyze_data_refs_alignment (loop_v
>>      {
>>        stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>        if (STMT_VINFO_VECTORIZABLE (stmt_info))
>> -       vect_compute_data_ref_alignment (dr);
>> +       vect_compute_data_ref_alignment (stmt_info);
>>      }
>>
>>    return true;
>> @@ -2382,17 +2388,16 @@ vect_slp_analyze_and_verify_node_alignme
>>       the node is permuted in which case we start from the first
>>       element in the group.  */
>>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>> +  stmt_vec_info stmt_info = first_stmt_info;
>>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
>> -    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>> +    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>>
>> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
>> -  vect_compute_data_ref_alignment (dr);
>> +  vect_compute_data_ref_alignment (stmt_info);
>>    /* For creating the data-ref pointer we need alignment of the
>>       first element anyway.  */
>> -  if (dr != first_dr)
>> -    vect_compute_data_ref_alignment (first_dr);
>> -  if (! verify_data_ref_alignment (dr))
>> +  if (stmt_info != first_stmt_info)
>> +    vect_compute_data_ref_alignment (first_stmt_info);
>> +  if (! verify_data_ref_alignment (first_stmt_info))
>>      {
>>        if (dump_enabled_p ())
>>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> @@ -2430,19 +2435,19 @@ vect_slp_analyze_and_verify_instance_ali
>>  }
>>
>>
>> -/* Analyze groups of accesses: check that DR belongs to a group of
>> -   accesses of legal size, step, etc.  Detect gaps, single element
>> -   interleaving, and other special cases. Set grouped access info.
>> -   Collect groups of strided stores for further use in SLP analysis.
>> -   Worker for vect_analyze_group_access.  */
>> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
>> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
>> +   single element interleaving, and other special cases.  Set grouped
>> +   access info.  Collect groups of strided stores for further use in
>> +   SLP analysis.  Worker for vect_analyze_group_access.  */
>>
>>  static bool
>> -vect_analyze_group_access_1 (struct data_reference *dr)
>> +vect_analyze_group_access_1 (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree step = DR_STEP (dr);
>>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>>    HOST_WIDE_INT dr_step = -1;
>> @@ -2519,7 +2524,7 @@ vect_analyze_group_access_1 (struct data
>>        if (bb_vinfo)
>>         {
>>           /* Mark the statement as unvectorizable.  */
>> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
>> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>>           return true;
>>         }
>>
>> @@ -2667,18 +2672,18 @@ vect_analyze_group_access_1 (struct data
>>    return true;
>>  }
>>
>> -/* Analyze groups of accesses: check that DR belongs to a group of
>> -   accesses of legal size, step, etc.  Detect gaps, single element
>> -   interleaving, and other special cases. Set grouped access info.
>> -   Collect groups of strided stores for further use in SLP analysis.  */
>> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
>> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
>> +   single element interleaving, and other special cases.  Set grouped
>> +   access info.  Collect groups of strided stores for further use in
>> +   SLP analysis.  */
>>
>>  static bool
>> -vect_analyze_group_access (struct data_reference *dr)
>> +vect_analyze_group_access (stmt_vec_info stmt_info)
>>  {
>> -  if (!vect_analyze_group_access_1 (dr))
>> +  if (!vect_analyze_group_access_1 (stmt_info))
>>      {
>>        /* Dissolve the group if present.  */
>> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
>>        while (stmt_info)
>>         {
>>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
>> @@ -2691,16 +2696,16 @@ vect_analyze_group_access (struct data_r
>>    return true;
>>  }
>>
>> -/* Analyze the access pattern of the data-reference DR.
>> +/* Analyze the access pattern of the load or store in STMT_INFO.
>>     In case of non-consecutive accesses call vect_analyze_group_access() to
>>     analyze groups of accesses.  */
>>
>>  static bool
>> -vect_analyze_data_ref_access (struct data_reference *dr)
>> +vect_analyze_data_ref_access (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree step = DR_STEP (dr);
>>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    struct loop *loop = NULL;
>>
>> @@ -2780,10 +2785,10 @@ vect_analyze_data_ref_access (struct dat
>>    if (TREE_CODE (step) != INTEGER_CST)
>>      return (STMT_VINFO_STRIDED_P (stmt_info)
>>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> -               || vect_analyze_group_access (dr)));
>> +               || vect_analyze_group_access (stmt_info)));
>>
>>    /* Not consecutive access - check if it's a part of interleaving group.  */
>> -  return vect_analyze_group_access (dr);
>> +  return vect_analyze_group_access (stmt_info);
>>  }
>>
>>  /* Compare two data-references DRA and DRB to group them into chunks
>> @@ -3062,25 +3067,28 @@ vect_analyze_data_ref_accesses (vec_info
>>      }
>>
>>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
>> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
>> -        && !vect_analyze_data_ref_access (dr))
>> -      {
>> -       if (dump_enabled_p ())
>> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> -                          "not vectorized: complicated access pattern.\n");
>> +    {
>> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +      if (STMT_VINFO_VECTORIZABLE (stmt_info)
>> +         && !vect_analyze_data_ref_access (stmt_info))
>> +       {
>> +         if (dump_enabled_p ())
>> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> +                            "not vectorized: complicated access pattern.\n");
>>
>> -        if (is_a <bb_vec_info> (vinfo))
>> -         {
>> -           /* Mark the statement as not vectorizable.  */
>> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
>> -           continue;
>> -         }
>> -        else
>> -         {
>> -           datarefs_copy.release ();
>> -           return false;
>> -         }
>> -      }
>> +         if (is_a <bb_vec_info> (vinfo))
>> +           {
>> +             /* Mark the statement as not vectorizable.  */
>> +             STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>> +             continue;
>> +           }
>> +         else
>> +           {
>> +             datarefs_copy.release ();
>> +             return false;
>> +           }
>> +       }
>> +    }
>>
>>    datarefs_copy.release ();
>>    return true;
>> @@ -3089,7 +3097,7 @@ vect_analyze_data_ref_accesses (vec_info
>>  /* Function vect_vfa_segment_size.
>>
>>     Input:
>> -     DR: The data reference.
>> +     STMT_INFO: the load or store statement.
>>       LENGTH_FACTOR: segment length to consider.
>>
>>     Return a value suitable for the dr_with_seg_len::seg_len field.
>> @@ -3098,8 +3106,9 @@ vect_analyze_data_ref_accesses (vec_info
>>     the size of the access; in effect it only describes the first byte.  */
>>
>>  static tree
>> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
>> +vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    length_factor = size_binop (MINUS_EXPR,
>>                               fold_convert (sizetype, length_factor),
>>                               size_one_node);
>> @@ -3107,23 +3116,23 @@ vect_vfa_segment_size (struct data_refer
>>                      length_factor);
>>  }
>>
>> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
>> +/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
>>     gives the worst-case number of bytes covered by the segment.  */
>>
>>  static unsigned HOST_WIDE_INT
>> -vect_vfa_access_size (data_reference *dr)
>> +vect_vfa_access_size (stmt_vec_info stmt_vinfo)
>>  {
>> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
>>    tree ref_type = TREE_TYPE (DR_REF (dr));
>>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>>    unsigned HOST_WIDE_INT access_size = ref_size;
>>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>>      {
>> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
>> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>>      }
>>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
>> -      && (vect_supportable_dr_alignment (dr, false)
>> +      && (vect_supportable_dr_alignment (stmt_vinfo, false)
>>           == dr_explicit_realign_optimized))
>>      {
>>        /* We might access a full vector's worth.  */
>> @@ -3281,13 +3290,14 @@ vect_check_lower_bound (loop_vec_info lo
>>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>>  }
>>
>> -/* Return true if it's unlikely that the step of the vectorized form of DR
>> -   will span fewer than GAP bytes.  */
>> +/* Return true if it's unlikely that the step of the vectorized form of
>> +   the load or store in STMT_INFO will span fewer than GAP bytes.  */
>>
>>  static bool
>> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
>> +vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    HOST_WIDE_INT count
>>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
>> @@ -3295,16 +3305,20 @@ vect_small_gap_p (loop_vec_info loop_vin
>>    return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
>>  }
>>
>> -/* Return true if we know that there is no alias between DR_A and DR_B
>> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
>> -   *LOWER_BOUND_OUT to this N.  */
>> +/* Return true if we know that there is no alias between the loads and
>> +   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
>> +   STMT_INFO_A's access is >= some N.  When returning true,
>> +   set *LOWER_BOUND_OUT to this N.  */
>>
>>  static bool
>> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
>> +vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
>> +                               stmt_vec_info stmt_info_b,
>>                                 poly_uint64 *lower_bound_out)
>>  {
>>    /* Check that there is a constant gap of known sign between DR_A
>>       and DR_B.  */
>> +  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
>> +  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
>>    poly_int64 init_a, init_b;
>>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
>> @@ -3324,8 +3338,7 @@ vectorizable_with_step_bound_p (data_ref
>>    /* If the two accesses could be dependent within a scalar iteration,
>>       make sure that we'd retain their order.  */
>>    if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
>> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
>> -                                        vect_dr_stmt (dr_b)))
>> +      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
>>      return false;
>>
>>    /* There is no alias if abs (DR_STEP) is greater than or equal to
>> @@ -3426,7 +3439,8 @@ vect_prune_runtime_alias_test_list (loop
>>          and intra-iteration dependencies are guaranteed to be honored.  */
>>        if (ignore_step_p
>>           && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
>> -             || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
>> +             || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
>> +                                                &lower_bound)))
>>         {
>>           if (dump_enabled_p ())
>>             {
>> @@ -3446,9 +3460,10 @@ vect_prune_runtime_alias_test_list (loop
>>          than the number of bytes handled by one vector iteration.)  */
>>        if (!ignore_step_p
>>           && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
>> -         && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
>> -         && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
>> -             || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
>> +         && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
>> +                                            &lower_bound)
>> +         && (vect_small_gap_p (stmt_info_a, lower_bound)
>> +             || vect_small_gap_p (stmt_info_b, lower_bound)))
>>         {
>>           bool unsigned_p = dr_known_forward_stride_p (dr_a);
>>           if (dump_enabled_p ())
>> @@ -3501,11 +3516,13 @@ vect_prune_runtime_alias_test_list (loop
>>             length_factor = scalar_loop_iters;
>>           else
>>             length_factor = size_int (vect_factor);
>> -         segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
>> -         segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
>> +         segment_length_a = vect_vfa_segment_size (stmt_info_a,
>> +                                                   length_factor);
>> +         segment_length_b = vect_vfa_segment_size (stmt_info_b,
>> +                                                   length_factor);
>>         }
>> -      access_size_a = vect_vfa_access_size (dr_a);
>> -      access_size_b = vect_vfa_access_size (dr_b);
>> +      access_size_a = vect_vfa_access_size (stmt_info_a);
>> +      access_size_b = vect_vfa_access_size (stmt_info_b);
>>        align_a = vect_vfa_align (dr_a);
>>        align_b = vect_vfa_align (dr_b);
>>
>> @@ -4463,12 +4480,12 @@ vect_get_new_ssa_name (tree type, enum v
>>    return new_vect_var;
>>  }
>>
>> -/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
>> +/* Duplicate ptr info and set alignment/misaligment on NAME from STMT_INFO.  */
>>
>>  static void
>> -vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
>> +vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
>>    int misalign = dr_misalignment (stmt_info);
>>    if (misalign == DR_MISALIGNMENT_UNKNOWN)
>> @@ -4579,7 +4596,7 @@ vect_create_addr_base_for_vector_ref (st
>>        && TREE_CODE (addr_base) == SSA_NAME
>>        && !SSA_NAME_PTR_INFO (addr_base))
>>      {
>> -      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
>> +      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
>>        if (offset || byte_offset)
>>         mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
>>      }
>> @@ -4845,8 +4862,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>>        /* Copy the points-to information if it exists. */
>>        if (DR_PTR_INFO (dr))
>>         {
>> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
>> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
>> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
>> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>>         }
>>        if (ptr_incr)
>>         *ptr_incr = incr;
>> @@ -4875,8 +4892,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>>        /* Copy the points-to information if it exists. */
>>        if (DR_PTR_INFO (dr))
>>         {
>> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
>> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
>> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
>> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>>         }
>>        if (ptr_incr)
>>         *ptr_incr = incr;
>> @@ -6434,17 +6451,17 @@ vect_can_force_dr_alignment_p (const_tre
>>  }
>>
>>
>> -/* Return whether the data reference DR is supported with respect to its
>> -   alignment.
>> +/* Return whether the load or store in STMT_INFO is supported with
>> +   respect to its alignment.
>>     If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
>>     it is aligned, i.e., check if it is possible to vectorize it with different
>>     alignment.  */
>>
>>  enum dr_alignment_support
>> -vect_supportable_dr_alignment (struct data_reference *dr,
>> +vect_supportable_dr_alignment (stmt_vec_info stmt_info,
>>                                 bool check_aligned_accesses)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>    machine_mode mode = TYPE_MODE (vectype);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>> Index: gcc/tree-vect-stmts.c
>> ===================================================================
>> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:05.744462369 +0100
>> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:08.924434128 +0100
>> @@ -1057,8 +1057,8 @@ vect_get_store_cost (stmt_vec_info stmt_
>>                      unsigned int *inside_cost,
>>                      stmt_vector_for_cost *body_cost_vec)
>>  {
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  int alignment_support_scheme
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>
>>    switch (alignment_support_scheme)
>>      {
>> @@ -1237,8 +1237,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
>>                     stmt_vector_for_cost *body_cost_vec,
>>                     bool record_prologue_costs)
>>  {
>> -  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  int alignment_support_scheme
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>
>>    switch (alignment_support_scheme)
>>      {
>> @@ -2340,7 +2340,6 @@ get_negative_load_store_type (stmt_vec_i
>>                               vec_load_store_type vls_type,
>>                               unsigned int ncopies)
>>  {
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    dr_alignment_support alignment_support_scheme;
>>
>>    if (ncopies > 1)
>> @@ -2351,7 +2350,7 @@ get_negative_load_store_type (stmt_vec_i
>>        return VMAT_ELEMENTWISE;
>>      }
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
>>    if (alignment_support_scheme != dr_aligned
>>        && alignment_support_scheme != dr_unaligned_supported)
>>      {
>> @@ -2924,15 +2923,14 @@ vect_get_strided_load_store_ops (stmt_ve
>>  }
>>
>>  /* Return the amount that should be added to a vector pointer to move
>> -   to the next or previous copy of AGGR_TYPE.  DR is the data reference
>> -   being vectorized and MEMORY_ACCESS_TYPE describes the type of
>> +   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
>> +   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
>>     vectorization.  */
>>
>>  static tree
>> -vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
>> +vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
>>                              vect_memory_access_type memory_access_type)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    if (memory_access_type == VMAT_INVARIANT)
>>      return size_zero_node;
>>
>> @@ -6171,12 +6169,12 @@ vectorizable_operation (stmt_vec_info st
>>    return true;
>>  }
>>
>> -/* A helper function to ensure data reference DR's base alignment.  */
>> +/* If we decided to increase the base alignment for the memory access in
>> +   STMT_INFO, but haven't increased it yet, do so now.  */
>>
>>  static void
>> -ensure_base_align (struct data_reference *dr)
>> +ensure_base_align (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
>>      return;
>>
>> @@ -6439,7 +6437,7 @@ vectorizable_store (stmt_vec_info stmt_i
>>
>>    /* Transform.  */
>>
>> -  ensure_base_align (dr);
>> +  ensure_base_align (stmt_info);
>>
>>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>>      {
>> @@ -6882,7 +6880,8 @@ vectorizable_store (stmt_vec_info stmt_i
>>    auto_vec<tree> dr_chain (group_size);
>>    oprnds.create (group_size);
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
>> +  alignment_support_scheme
>> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>>    gcc_assert (alignment_support_scheme);
>>    vec_loop_masks *loop_masks
>>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
>> @@ -6920,7 +6919,8 @@ vectorizable_store (stmt_vec_info stmt_i
>>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>>        else
>>         aggr_type = vectype;
>> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
>> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
>> +                                         memory_access_type);
>>      }
>>
>>    if (mask)
>> @@ -7667,7 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
>>
>>    /* Transform.  */
>>
>> -  ensure_base_align (dr);
>> +  ensure_base_align (stmt_info);
>>
>>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>>      {
>> @@ -7990,7 +7990,8 @@ vectorizable_load (stmt_vec_info stmt_in
>>        ref_type = reference_alias_ptr_type (DR_REF (first_dr));
>>      }
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
>> +  alignment_support_scheme
>> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>>    gcc_assert (alignment_support_scheme);
>>    vec_loop_masks *loop_masks
>>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
>> @@ -8155,7 +8156,8 @@ vectorizable_load (stmt_vec_info stmt_in
>>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>>        else
>>         aggr_type = vectype;
>> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
>> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
>> +                                         memory_access_type);
>>      }
>>
>>    tree vec_mask = NULL_TREE;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-25 11:09     ` Richard Sandiford
@ 2018-07-25 11:48       ` Richard Biener
  2018-07-26 10:29         ` Richard Sandiford
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Biener @ 2018-07-25 11:48 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Wed, Jul 25, 2018 at 1:09 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Biener <richard.guenther@gmail.com> writes:
> > On Tue, Jul 24, 2018 at 12:07 PM Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
> >> possible to tell whether the statement is a pattern statement without
> >> referring to other statements.  The new field goes in what was
> >> previously a hole in the structure, so the size is the same as before.
> >
> > Not sure what the advantage is?  is_pattern_stmt_p () looks nicer
> > than ->is_pattern_p
>
> I can keep the function wrapper if you prefer that.  But having a
> statement "know" whether it's a pattern stmt makes things like
> freeing stmt_vec_infos simpler (see later patches in the series).

Ah, ok.

> It should also be cheaper to test, but that's much more minor.

So please keep the wrapper.

I guess at some point we should decide what to do with all
the STMT_VINFO_ macros (and the others, {LOOP,BB}_ stuff
is already used inconsistently).

Richard.

> Thanks,
> Richard
>
> >
> >>
> >> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
> >>
> >> gcc/
> >>         * tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
> >>         (is_pattern_stmt_p): Delete.
> >>         * tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
> >>         on pattern statements.
> >>         (vect_split_statement, vect_mark_pattern_stmts): Use the new
> >>         pattern_stmt_p field instead of is_pattern_stmt_p.
> >>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Likewise.
> >>         * tree-vect-loop.c (vectorizable_live_operation): Likewise.
> >>         * tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
> >>         (vect_find_last_scalar_stmt_in_slp, vect_remove_slp_scalar_calls)
> >>         (vect_schedule_slp): Likewise.
> >>         * tree-vect-stmts.c (vect_mark_stmts_to_be_vectorized): Likewise.
> >>         (vectorizable_call, vectorizable_simd_clone_call, vectorizable_shift)
> >>         (vectorizable_store, vect_remove_stores): Likewise.
> >>
> >> Index: gcc/tree-vectorizer.h
> >> ===================================================================
> >> --- gcc/tree-vectorizer.h       2018-07-24 10:23:56.440544995 +0100
> >> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:02.364492386 +0100
> >> @@ -791,6 +791,12 @@ struct _stmt_vec_info {
> >>    /* Stmt is part of some pattern (computation idiom)  */
> >>    bool in_pattern_p;
> >>
> >> +  /* True if the statement was created during pattern recognition as
> >> +     part of the replacement for RELATED_STMT.  This implies that the
> >> +     statement isn't part of any basic block, although for convenience
> >> +     its gimple_bb is the same as for RELATED_STMT.  */
> >> +  bool pattern_stmt_p;
> >> +
> >>    /* Is this statement vectorizable or should it be skipped in (partial)
> >>       vectorization.  */
> >>    bool vectorizable;
> >> @@ -1151,16 +1157,6 @@ get_later_stmt (stmt_vec_info stmt1_info
> >>      return stmt2_info;
> >>  }
> >>
> >> -/* Return TRUE if a statement represented by STMT_INFO is a part of a
> >> -   pattern.  */
> >> -
> >> -static inline bool
> >> -is_pattern_stmt_p (stmt_vec_info stmt_info)
> >> -{
> >> -  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> >> -  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
> >> -}
> >> -
> >>  /* Return true if BB is a loop header.  */
> >>
> >>  static inline bool
> >> Index: gcc/tree-vect-patterns.c
> >> ===================================================================
> >> --- gcc/tree-vect-patterns.c    2018-07-24 10:23:59.408518638 +0100
> >> +++ gcc/tree-vect-patterns.c    2018-07-24 10:24:02.360492422 +0100
> >> @@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
> >>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
> >>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
> >>
> >> +  pattern_stmt_info->pattern_stmt_p = true;
> >>    STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
> >>    STMT_VINFO_DEF_TYPE (pattern_stmt_info)
> >>      = STMT_VINFO_DEF_TYPE (orig_stmt_info);
> >> @@ -630,7 +631,7 @@ vect_recog_temp_ssa_var (tree type, gimp
> >>  vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
> >>                       gimple *stmt1, tree vectype)
> >>  {
> >> -  if (is_pattern_stmt_p (stmt2_info))
> >> +  if (stmt2_info->pattern_stmt_p)
> >>      {
> >>        /* STMT2_INFO is part of a pattern.  Get the statement to which
> >>          the pattern is attached.  */
> >> @@ -4726,7 +4727,7 @@ vect_mark_pattern_stmts (stmt_vec_info o
> >>    gimple *def_seq = STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt_info);
> >>
> >>    gimple *orig_pattern_stmt = NULL;
> >> -  if (is_pattern_stmt_p (orig_stmt_info))
> >> +  if (orig_stmt_info->pattern_stmt_p)
> >>      {
> >>        /* We're replacing a statement in an existing pattern definition
> >>          sequence.  */
> >> Index: gcc/tree-vect-data-refs.c
> >> ===================================================================
> >> --- gcc/tree-vect-data-refs.c   2018-07-24 10:23:53.204573732 +0100
> >> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:02.356492457 +0100
> >> @@ -212,9 +212,9 @@ vect_preserves_scalar_order_p (stmt_vec_
> >>       (but could happen later) while reads will happen no later than their
> >>       current position (but could happen earlier).  Reordering is therefore
> >>       only possible if the first access is a write.  */
> >> -  if (is_pattern_stmt_p (stmtinfo_a))
> >> +  if (stmtinfo_a->pattern_stmt_p)
> >>      stmtinfo_a = STMT_VINFO_RELATED_STMT (stmtinfo_a);
> >> -  if (is_pattern_stmt_p (stmtinfo_b))
> >> +  if (stmtinfo_b->pattern_stmt_p)
> >>      stmtinfo_b = STMT_VINFO_RELATED_STMT (stmtinfo_b);
> >>    stmt_vec_info earlier_stmt_info = get_earlier_stmt (stmtinfo_a, stmtinfo_b);
> >>    return !DR_IS_WRITE (STMT_VINFO_DATA_REF (earlier_stmt_info));
> >> Index: gcc/tree-vect-loop.c
> >> ===================================================================
> >> --- gcc/tree-vect-loop.c        2018-07-24 10:23:56.436545030 +0100
> >> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:02.360492422 +0100
> >> @@ -7907,7 +7907,7 @@ vectorizable_live_operation (stmt_vec_in
> >>      }
> >>
> >>    /* If stmt has a related stmt, then use that for getting the lhs.  */
> >> -  gimple *stmt = (is_pattern_stmt_p (stmt_info)
> >> +  gimple *stmt = (stmt_info->pattern_stmt_p
> >>                   ? STMT_VINFO_RELATED_STMT (stmt_info)->stmt
> >>                   : stmt_info->stmt);
> >>
> >> Index: gcc/tree-vect-slp.c
> >> ===================================================================
> >> --- gcc/tree-vect-slp.c 2018-07-24 10:23:53.204573732 +0100
> >> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:02.360492422 +0100
> >> @@ -376,7 +376,7 @@ vect_get_and_check_slp_defs (vec_info *v
> >>        /* Check if DEF_STMT_INFO is a part of a pattern in LOOP and get
> >>          the def stmt from the pattern.  Check that all the stmts of the
> >>          node are in the pattern.  */
> >> -      if (def_stmt_info && is_pattern_stmt_p (def_stmt_info))
> >> +      if (def_stmt_info && def_stmt_info->pattern_stmt_p)
> >>          {
> >>            pattern = true;
> >>            if (!first && !oprnd_info->first_pattern
> >> @@ -1315,7 +1315,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
> >>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
> >>                  do extra work to cancel the pattern so the uses see the
> >>                  scalar version.  */
> >> -             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
> >> +             && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
> >>             {
> >>               slp_tree grandchild;
> >>
> >> @@ -1359,7 +1359,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
> >>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
> >>              do extra work to cancel the pattern so the uses see the
> >>              scalar version.  */
> >> -         && !is_pattern_stmt_p (stmt_info))
> >> +         && !stmt_info->pattern_stmt_p)
> >>         {
> >>           dump_printf_loc (MSG_NOTE, vect_location,
> >>                            "Building vector operands from scalars\n");
> >> @@ -1486,7 +1486,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
> >>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
> >>                      to do extra work to cancel the pattern so the uses see the
> >>                      scalar version.  */
> >> -                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
> >> +                 && !SLP_TREE_SCALAR_STMTS (child)[0]->pattern_stmt_p)
> >>                 {
> >>                   unsigned int j;
> >>                   slp_tree grandchild;
> >> @@ -1848,7 +1848,7 @@ vect_find_last_scalar_stmt_in_slp (slp_t
> >>
> >>    for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
> >>      {
> >> -      if (is_pattern_stmt_p (stmt_vinfo))
> >> +      if (stmt_vinfo->pattern_stmt_p)
> >>         stmt_vinfo = STMT_VINFO_RELATED_STMT (stmt_vinfo);
> >>        last = last ? get_later_stmt (stmt_vinfo, last) : stmt_vinfo;
> >>      }
> >> @@ -4044,8 +4044,7 @@ vect_remove_slp_scalar_calls (slp_tree n
> >>        gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
> >>        if (!stmt || gimple_bb (stmt) == NULL)
> >>         continue;
> >> -      if (is_pattern_stmt_p (stmt_info)
> >> -         || !PURE_SLP_STMT (stmt_info))
> >> +      if (stmt_info->pattern_stmt_p || !PURE_SLP_STMT (stmt_info))
> >>         continue;
> >>        lhs = gimple_call_lhs (stmt);
> >>        new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> >> @@ -4106,7 +4105,7 @@ vect_schedule_slp (vec_info *vinfo)
> >>           if (!STMT_VINFO_DATA_REF (store_info))
> >>             break;
> >>
> >> -         if (is_pattern_stmt_p (store_info))
> >> +         if (store_info->pattern_stmt_p)
> >>             store_info = STMT_VINFO_RELATED_STMT (store_info);
> >>           /* Free the attached stmt_vec_info and remove the stmt.  */
> >>           gsi = gsi_for_stmt (store_info);
> >> Index: gcc/tree-vect-stmts.c
> >> ===================================================================
> >> --- gcc/tree-vect-stmts.c       2018-07-24 10:23:56.440544995 +0100
> >> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:02.364492386 +0100
> >> @@ -731,7 +731,7 @@ vect_mark_stmts_to_be_vectorized (loop_v
> >>              break;
> >>          }
> >>
> >> -      if (is_pattern_stmt_p (stmt_vinfo))
> >> +      if (stmt_vinfo->pattern_stmt_p)
> >>          {
> >>            /* Pattern statements are not inserted into the code, so
> >>               FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
> >> @@ -3623,7 +3623,7 @@ vectorizable_call (stmt_vec_info stmt_in
> >>    if (slp_node)
> >>      return true;
> >>
> >> -  if (is_pattern_stmt_p (stmt_info))
> >> +  if (stmt_info->pattern_stmt_p)
> >>      stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> >>    lhs = gimple_get_lhs (stmt_info->stmt);
> >>
> >> @@ -4362,7 +4362,7 @@ vectorizable_simd_clone_call (stmt_vec_i
> >>    if (scalar_dest)
> >>      {
> >>        type = TREE_TYPE (scalar_dest);
> >> -      if (is_pattern_stmt_p (stmt_info))
> >> +      if (stmt_info->pattern_stmt_p)
> >>         lhs = gimple_call_lhs (STMT_VINFO_RELATED_STMT (stmt_info)->stmt);
> >>        else
> >>         lhs = gimple_call_lhs (stmt);
> >> @@ -5552,7 +5552,7 @@ vectorizable_shift (stmt_vec_info stmt_i
> >>        /* If the shift amount is computed by a pattern stmt we cannot
> >>           use the scalar amount directly thus give up and use a vector
> >>          shift.  */
> >> -      if (op1_def_stmt_info && is_pattern_stmt_p (op1_def_stmt_info))
> >> +      if (op1_def_stmt_info && op1_def_stmt_info->pattern_stmt_p)
> >>         scalar_shift_arg = false;
> >>      }
> >>    else
> >> @@ -6286,7 +6286,7 @@ vectorizable_store (stmt_vec_info stmt_i
> >>      {
> >>        tree scalar_dest = gimple_assign_lhs (assign);
> >>        if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
> >> -         && is_pattern_stmt_p (stmt_info))
> >> +         && stmt_info->pattern_stmt_p)
> >>         scalar_dest = TREE_OPERAND (scalar_dest, 0);
> >>        if (TREE_CODE (scalar_dest) != ARRAY_REF
> >>           && TREE_CODE (scalar_dest) != BIT_FIELD_REF
> >> @@ -9839,7 +9839,7 @@ vect_remove_stores (stmt_vec_info first_
> >>    while (next_stmt_info)
> >>      {
> >>        stmt_vec_info tmp = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
> >> -      if (is_pattern_stmt_p (next_stmt_info))
> >> +      if (next_stmt_info->pattern_stmt_p)
> >>         next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
> >>        /* Free the attached stmt_vec_info and remove the stmt.  */
> >>        next_si = gsi_for_stmt (next_stmt_info->stmt);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-25 11:48       ` Richard Biener
@ 2018-07-26 10:29         ` Richard Sandiford
  2018-07-26 11:15           ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-26 10:29 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Jul 25, 2018 at 1:09 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> Richard Biener <richard.guenther@gmail.com> writes:
>> > On Tue, Jul 24, 2018 at 12:07 PM Richard Sandiford
>> > <richard.sandiford@arm.com> wrote:
>> >>
>> >> This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
>> >> possible to tell whether the statement is a pattern statement without
>> >> referring to other statements.  The new field goes in what was
>> >> previously a hole in the structure, so the size is the same as before.
>> >
>> > Not sure what the advantage is?  is_pattern_stmt_p () looks nicer
>> > than ->is_pattern_p
>>
>> I can keep the function wrapper if you prefer that.  But having a
>> statement "know" whether it's a pattern stmt makes things like
>> freeing stmt_vec_infos simpler (see later patches in the series).
>
> Ah, ok.
>
>> It should also be cheaper to test, but that's much more minor.
>
> So please keep the wrapper.

Like this?

> I guess at some point we should decide what to do with all
> the STMT_VINFO_ macros (and the others, {LOOP,BB}_ stuff
> is already used inconsistently).

Yeah...


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
	(is_pattern_stmt_p): Use it.
	* tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
	on pattern statements.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:28:18.000000000 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:28:19.072951054 +0100
@@ -791,6 +791,12 @@ struct _stmt_vec_info {
   /* Stmt is part of some pattern (computation idiom)  */
   bool in_pattern_p;
 
+  /* True if the statement was created during pattern recognition as
+     part of the replacement for RELATED_STMT.  This implies that the
+     statement isn't part of any basic block, although for convenience
+     its gimple_bb is the same as for RELATED_STMT.  */
+  bool pattern_stmt_p;
+
   /* Is this statement vectorizable or should it be skipped in (partial)
      vectorization.  */
   bool vectorizable;
@@ -1157,8 +1163,7 @@ get_later_stmt (stmt_vec_info stmt1_info
 static inline bool
 is_pattern_stmt_p (stmt_vec_info stmt_info)
 {
-  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
-  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
+  return stmt_info->pattern_stmt_p;
 }
 
 /* Return true if BB is a loop header.  */
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-26 11:28:18.000000000 +0100
+++ gcc/tree-vect-patterns.c	2018-07-26 11:28:19.068951168 +0100
@@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
     pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
   gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
 
+  pattern_stmt_info->pattern_stmt_p = true;
   STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
   STMT_VINFO_DEF_TYPE (pattern_stmt_info)
     = STMT_VINFO_DEF_TYPE (orig_stmt_info);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [37/46] Associate alignment information with stmt_vec_infos
  2018-07-25 10:18   ` Richard Biener
@ 2018-07-26 10:55     ` Richard Sandiford
  2018-07-26 11:13       ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-26 10:55 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> Alignment information is really a property of a stmt_vec_info
>> (and the way we want to vectorise it) rather than the original scalar dr.
>> I think that was true even before the recent dr sharing.
>
> But that is only so as long as we handle only stmts with a single DR.
> In reality alignment info _is_ a property of the DR and not of the stmt.
>
> So you're doing a shortcut here, shouldn't we rename
> dr_misalignment to stmt_dr_misalignment then?
>
> Otherwise I don't see how this makes sense semantically.

OK, the patch below takes a different approach, suggested in the
38/46 thread.  The idea is to make dr_aux link back to both the scalar
data_reference and the containing stmt_vec_info, so that it becomes a
lookup-free key for a vectorisable reference.

The data_reference link is just STMT_VINFO_DATA_REF, moved from
_stmt_vec_info.  The stmt pointer is a new field and always tracks
the current stmt_vec_info for the reference (which might be a pattern
stmt or the original stmt).

Then 38/46 can use dr_aux instead of data_reference (compared to current
sources) and instead of stmt_vec_info (compared to the original series).
This still avoids the repeated lookups that the series is trying to avoid.

The patch also makes the dr_aux in the current (possibly pattern) stmt
be the one that counts, rather than have the information stay with the
original DR_STMT.  A new macro (STMT_VINFO_DR_INFO) gives this
information for a given stmt_vec_info.

The changes together should make it easier to have multiple dr_auxs
in a single statement.
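
As a rough sketch of what this buys us (illustration only, not part of
the patch; "example_get_dr" is a made-up name, while the other names are
the ones introduced below), both links become plain field accesses
rather than hash lookups:

  static inline data_reference *
  example_get_dr (stmt_vec_info stmt_info)
  {
    dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
    /* dr_aux.stmt always tracks the current (possibly pattern) stmt.  */
    gcc_checking_assert (dr_info->stmt == stmt_info);
    return dr_info->dr;
  }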

Thanks,
Richard


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::move_dr): New member function.
	(dataref_aux): Rename to...
	(dr_vec_info): ...this and add "dr" and "stmt" fields.
	(_stmt_vec_info::dr_aux): Update accordingly.
	(_stmt_vec_info::data_ref_info): Delete.
	(STMT_VINFO_GROUPED_ACCESS, DR_GROUP_FIRST_ELEMENT)
	(DR_GROUP_NEXT_ELEMENT, DR_GROUP_SIZE, DR_GROUP_STORE_COUNT)
	(DR_GROUP_GAP, DR_GROUP_SAME_DR_STMT, REDUC_GROUP_FIRST_ELEMENT)
	(REDUC_GROUP_NEXT_ELEMENT, REDUC_GROUP_SIZE): Use dr_aux.dr instead
	of data_ref.
	(STMT_VINFO_DATA_REF): Likewise.  Turn into an lvalue.
	(STMT_VINFO_DR_INFO): New macro.
	(DR_VECT_AUX): Use STMT_VINFO_DR_INFO and vect_dr_stmt.
	(set_dr_misalignment): Update after rename of dataref_aux.
	(vect_dr_stmt): Move earlier in file.  Return dr_aux.stmt.
	* tree-vect-stmts.c (new_stmt_vec_info): Remove redundant
	initialization of STMT_VINFO_DATA_REF.
	* tree-vectorizer.c (vec_info::move_dr): New function.
	* tree-vect-patterns.c (vect_recog_bool_pattern)
	(vect_recog_mask_conversion_pattern)
	(vect_recog_gather_scatter_pattern): Use it.
	* tree-vect-data-refs.c (vect_analyze_data_refs): Initialize
	the "dr" and "stmt" fields of dr_vec_info instead of
	STMT_VINFO_DATA_REF.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:30:55.000000000 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:30:56.197256524 +0100
@@ -240,6 +240,7 @@ struct vec_info {
   stmt_vec_info lookup_stmt (gimple *);
   stmt_vec_info lookup_def (tree);
   stmt_vec_info lookup_single_use (tree);
+  void move_dr (stmt_vec_info, stmt_vec_info);
 
   /* The type of vectorization.  */
   vec_kind kind;
@@ -767,7 +768,11 @@ enum vect_memory_access_type {
   VMAT_GATHER_SCATTER
 };
 
-struct dataref_aux {
+struct dr_vec_info {
+  /* The data reference itself.  */
+  data_reference *dr;
+  /* The statement that contains the data reference.  */
+  stmt_vec_info stmt;
   /* The misalignment in bytes of the reference, or -1 if not known.  */
   int misalignment;
   /* The byte alignment that we'd ideally like the reference to have,
@@ -818,11 +823,7 @@ struct _stmt_vec_info {
      data-ref (array/pointer/struct access). A GIMPLE stmt is expected to have
      at most one such data-ref.  */
 
-  /* Information about the data-ref (access function, etc),
-     relative to the inner-most containing loop.  */
-  struct data_reference *data_ref_info;
-
-  dataref_aux dr_aux;
+  dr_vec_info dr_aux;
 
   /* Information about the data-ref relative to this loop
      nest (the loop that is being considered for vectorization).  */
@@ -996,7 +997,7 @@ #define STMT_VINFO_LIVE_P(S)
 #define STMT_VINFO_VECTYPE(S)              (S)->vectype
 #define STMT_VINFO_VEC_STMT(S)             (S)->vectorized_stmt
 #define STMT_VINFO_VECTORIZABLE(S)         (S)->vectorizable
-#define STMT_VINFO_DATA_REF(S)             (S)->data_ref_info
+#define STMT_VINFO_DATA_REF(S)             ((S)->dr_aux.dr + 0)
 #define STMT_VINFO_GATHER_SCATTER_P(S)	   (S)->gather_scatter_p
 #define STMT_VINFO_STRIDED_P(S)	   	   (S)->strided_p
 #define STMT_VINFO_MEMORY_ACCESS_TYPE(S)   (S)->memory_access_type
@@ -1017,13 +1018,17 @@ #define STMT_VINFO_DR_OFFSET_ALIGNMENT(S
 #define STMT_VINFO_DR_STEP_ALIGNMENT(S) \
   (S)->dr_wrt_vec_loop.step_alignment
 
+#define STMT_VINFO_DR_INFO(S) \
+  (gcc_checking_assert ((S)->dr_aux.stmt == (S)), &(S)->dr_aux)
+
 #define STMT_VINFO_IN_PATTERN_P(S)         (S)->in_pattern_p
 #define STMT_VINFO_RELATED_STMT(S)         (S)->related_stmt
 #define STMT_VINFO_PATTERN_DEF_SEQ(S)      (S)->pattern_def_seq
 #define STMT_VINFO_SAME_ALIGN_REFS(S)      (S)->same_align_refs
 #define STMT_VINFO_SIMD_CLONE_INFO(S)	   (S)->simd_clone_info
 #define STMT_VINFO_DEF_TYPE(S)             (S)->def_type
-#define STMT_VINFO_GROUPED_ACCESS(S)      ((S)->data_ref_info && DR_GROUP_FIRST_ELEMENT(S))
+#define STMT_VINFO_GROUPED_ACCESS(S) \
+  ((S)->dr_aux.dr && DR_GROUP_FIRST_ELEMENT(S))
 #define STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED(S) (S)->loop_phi_evolution_base_unchanged
 #define STMT_VINFO_LOOP_PHI_EVOLUTION_PART(S) (S)->loop_phi_evolution_part
 #define STMT_VINFO_MIN_NEG_DIST(S)	(S)->min_neg_dist
@@ -1031,16 +1036,25 @@ #define STMT_VINFO_NUM_SLP_USES(S)	(S)->
 #define STMT_VINFO_REDUC_TYPE(S)	(S)->reduc_type
 #define STMT_VINFO_REDUC_DEF(S)		(S)->reduc_def
 
-#define DR_GROUP_FIRST_ELEMENT(S)  (gcc_checking_assert ((S)->data_ref_info), (S)->first_element)
-#define DR_GROUP_NEXT_ELEMENT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->next_element)
-#define DR_GROUP_SIZE(S)           (gcc_checking_assert ((S)->data_ref_info), (S)->size)
-#define DR_GROUP_STORE_COUNT(S)    (gcc_checking_assert ((S)->data_ref_info), (S)->store_count)
-#define DR_GROUP_GAP(S)            (gcc_checking_assert ((S)->data_ref_info), (S)->gap)
-#define DR_GROUP_SAME_DR_STMT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->same_dr_stmt)
-
-#define REDUC_GROUP_FIRST_ELEMENT(S)	(gcc_checking_assert (!(S)->data_ref_info), (S)->first_element)
-#define REDUC_GROUP_NEXT_ELEMENT(S)	(gcc_checking_assert (!(S)->data_ref_info), (S)->next_element)
-#define REDUC_GROUP_SIZE(S)		(gcc_checking_assert (!(S)->data_ref_info), (S)->size)
+#define DR_GROUP_FIRST_ELEMENT(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->first_element)
+#define DR_GROUP_NEXT_ELEMENT(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->next_element)
+#define DR_GROUP_SIZE(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->size)
+#define DR_GROUP_STORE_COUNT(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->store_count)
+#define DR_GROUP_GAP(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->gap)
+#define DR_GROUP_SAME_DR_STMT(S) \
+  (gcc_checking_assert ((S)->dr_aux.dr), (S)->same_dr_stmt)
+
+#define REDUC_GROUP_FIRST_ELEMENT(S) \
+  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->first_element)
+#define REDUC_GROUP_NEXT_ELEMENT(S) \
+  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->next_element)
+#define REDUC_GROUP_SIZE(S) \
+  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->size)
 
 #define STMT_VINFO_RELEVANT_P(S)          ((S)->relevant != vect_unused_in_scope)
 
@@ -1048,7 +1062,7 @@ #define HYBRID_SLP_STMT(S)
 #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
 #define STMT_SLP_TYPE(S)                   (S)->slp_type
 
-#define DR_VECT_AUX(dr) (&vinfo_for_stmt (DR_STMT (dr))->dr_aux)
+#define DR_VECT_AUX(dr) (STMT_VINFO_DR_INFO (vect_dr_stmt (dr)))
 
 #define VECT_MAX_COST 1000
 
@@ -1259,6 +1273,20 @@ add_stmt_costs (void *data, stmt_vector_
 		   cost->misalign, cost->where);
 }
 
+/* Return the stmt DR is in.  For DR_STMT that have been replaced by
+   a pattern this returns the corresponding pattern stmt.  Otherwise
+   DR_STMT is returned.  */
+
+inline stmt_vec_info
+vect_dr_stmt (data_reference *dr)
+{
+  gimple *stmt = DR_STMT (dr);
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
+  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
+  return stmt_info->dr_aux.stmt;
+}
+
 /*-----------------------------------------------------------------*/
 /* Info on data references alignment.                              */
 /*-----------------------------------------------------------------*/
@@ -1268,8 +1296,7 @@ #define DR_MISALIGNMENT_UNINITIALIZED (-
 inline void
 set_dr_misalignment (struct data_reference *dr, int val)
 {
-  dataref_aux *data_aux = DR_VECT_AUX (dr);
-  data_aux->misalignment = val;
+  DR_VECT_AUX (dr)->misalignment = val;
 }
 
 inline int
@@ -1336,22 +1363,6 @@ vect_dr_behavior (data_reference *dr)
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
 }
 
-/* Return the stmt DR is in.  For DR_STMT that have been replaced by
-   a pattern this returns the corresponding pattern stmt.  Otherwise
-   DR_STMT is returned.  */
-
-inline stmt_vec_info
-vect_dr_stmt (data_reference *dr)
-{
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
-    return STMT_VINFO_RELATED_STMT (stmt_info);
-  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
-  gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
-  return stmt_info;
-}
-
 /* Return true if the vect cost model is unlimited.  */
 static inline bool
 unlimited_cost_model (loop_p loop)
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-26 11:30:55.000000000 +0100
+++ gcc/tree-vect-stmts.c	2018-07-26 11:30:56.197256524 +0100
@@ -9872,7 +9872,6 @@ new_stmt_vec_info (gimple *stmt, vec_inf
   STMT_VINFO_VECTORIZABLE (res) = true;
   STMT_VINFO_IN_PATTERN_P (res) = false;
   STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
-  STMT_VINFO_DATA_REF (res) = NULL;
   STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
   STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
 
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-26 11:30:55.000000000 +0100
+++ gcc/tree-vectorizer.c	2018-07-26 11:30:56.197256524 +0100
@@ -562,6 +562,21 @@ vec_info::lookup_single_use (tree lhs)
   return NULL;
 }
 
+/* Record that NEW_STMT_INFO now implements the same data reference
+   as OLD_STMT_INFO.  */
+
+void
+vec_info::move_dr (stmt_vec_info new_stmt_info, stmt_vec_info old_stmt_info)
+{
+  gcc_assert (!is_pattern_stmt_p (old_stmt_info));
+  STMT_VINFO_DR_INFO (old_stmt_info)->stmt = new_stmt_info;
+  new_stmt_info->dr_aux = old_stmt_info->dr_aux;
+  STMT_VINFO_DR_WRT_VEC_LOOP (new_stmt_info)
+    = STMT_VINFO_DR_WRT_VEC_LOOP (old_stmt_info);
+  STMT_VINFO_GATHER_SCATTER_P (new_stmt_info)
+    = STMT_VINFO_GATHER_SCATTER_P (old_stmt_info);
+}
+
 /* A helper function to free scev and LOOP niter information, as well as
    clear loop constraint LOOP_C_FINITE.  */
 
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2018-07-26 11:30:55.000000000 +0100
+++ gcc/tree-vect-patterns.c	2018-07-26 11:30:56.193256600 +0100
@@ -3828,10 +3828,7 @@ vect_recog_bool_pattern (stmt_vec_info s
 	}
       pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
       pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
-      STMT_VINFO_DATA_REF (pattern_stmt_info)
-	= STMT_VINFO_DATA_REF (stmt_vinfo);
-      STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
-	= STMT_VINFO_DR_WRT_VEC_LOOP (stmt_vinfo);
+      vinfo->move_dr (pattern_stmt_info, stmt_vinfo);
       *type_out = vectype;
       vect_pattern_detected ("vect_recog_bool_pattern", last_stmt);
 
@@ -3954,14 +3951,7 @@ vect_recog_mask_conversion_pattern (stmt
 
       pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
       if (STMT_VINFO_DATA_REF (stmt_vinfo))
-	{
-	  STMT_VINFO_DATA_REF (pattern_stmt_info)
-	    = STMT_VINFO_DATA_REF (stmt_vinfo);
-	  STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
-	    = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_vinfo);
-	  STMT_VINFO_GATHER_SCATTER_P (pattern_stmt_info)
-	    = STMT_VINFO_GATHER_SCATTER_P (stmt_vinfo);
-	}
+	vinfo->move_dr (pattern_stmt_info, stmt_vinfo);
 
       *type_out = vectype1;
       vect_pattern_detected ("vect_recog_mask_conversion_pattern", last_stmt);
@@ -4283,11 +4273,7 @@ vect_recog_gather_scatter_pattern (stmt_
   /* Copy across relevant vectorization info and associate DR with the
      new pattern statement instead of the original statement.  */
   stmt_vec_info pattern_stmt_info = loop_vinfo->add_stmt (pattern_stmt);
-  STMT_VINFO_DATA_REF (pattern_stmt_info) = dr;
-  STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
-    = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
-  STMT_VINFO_GATHER_SCATTER_P (pattern_stmt_info)
-    = STMT_VINFO_GATHER_SCATTER_P (stmt_info);
+  loop_vinfo->move_dr (pattern_stmt_info, stmt_info);
 
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   *type_out = vectype;
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-26 11:30:55.000000000 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-26 11:30:56.193256600 +0100
@@ -4120,7 +4120,10 @@ vect_analyze_data_refs (vec_info *vinfo,
       poly_uint64 vf;
 
       gcc_assert (DR_REF (dr));
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      stmt_vec_info stmt_info = vinfo->lookup_stmt (DR_STMT (dr));
+      gcc_assert (!stmt_info->dr_aux.dr);
+      stmt_info->dr_aux.dr = dr;
+      stmt_info->dr_aux.stmt = stmt_info;
 
       /* Check that analysis of the data-ref succeeded.  */
       if (!DR_BASE_ADDRESS (dr) || !DR_OFFSET (dr) || !DR_INIT (dr)
@@ -4292,9 +4295,6 @@ vect_analyze_data_refs (vec_info *vinfo,
 	    }
 	}
 
-      gcc_assert (!STMT_VINFO_DATA_REF (stmt_info));
-      STMT_VINFO_DATA_REF (stmt_info) = dr;
-
       /* Set vectype for STMT.  */
       scalar_type = TREE_TYPE (DR_REF (dr));
       STMT_VINFO_VECTYPE (stmt_info)

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [38/46] Pass stmt_vec_infos instead of data_references where relevant
  2018-07-25 11:21     ` Richard Sandiford
@ 2018-07-26 11:05       ` Richard Sandiford
  2018-07-26 11:13         ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-26 11:05 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Sandiford <richard.sandiford@arm.com> writes:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
>> <richard.sandiford@arm.com> wrote:
>>>
>>> This patch makes various routines (mostly in tree-vect-data-refs.c)
>>> take stmt_vec_infos rather than data_references.  The affected routines
>>> are really dealing with the way that an access is going to vectorised
>>> for a particular stmt_vec_info, rather than with the original scalar
>>> access described by the data_reference.
>>
>> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
>> the functions?
>
> Not sure.  If we...
>
>> We currently cannot handle aggregate copies in the to-be-vectorized IL
>> but rely on SRA and friends to elide those.  That's the only two-DR
>> stmt I can think of for vectorization.  Maybe aggregate by-value / return
>> function calls with OMP SIMD if that supports this somehow.
>
> ...did this then I don't think a data_reference would be the natural
> way of identifying a DR within a stmt_vec_info.  Presumably the
> stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
> If both of those were vectors then a (stmt_vec_info, index) pair
> might make more sense than (stmt_vec_info, data_reference).
>
> Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
> so that there's a back-pointer to the DR, add a stmt_vec_info
> field to dataref_aux too, and then use dataref_aux instead of
> stmt_vec_info as the key.

New patch 37/46 does that.  The one below goes through and uses
dr_vec_info instead of data_reference in code that is dealing
with the way that a reference is going to be vectorised.
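
For example (a sketch only, not taken from the patch itself, though the
names are the ones the patch introduces), a caller that previously
passed the scalar data_reference around:

  int misalign = dr_misalignment (dr);

would now key off the vectorisation-specific info instead:

  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
  int misalign = dr_misalignment (dr_info);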

Thanks,
Richard


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (set_dr_misalignment, dr_misalignment)
	(DR_TARGET_ALIGNMENT, aligned_access_p, known_alignment_for_access_p)
	(vect_known_alignment_in_bytes, vect_dr_behavior)
	(vect_get_scalar_dr_size): Take references as dr_vec_infos
	instead of data_references.  Update calls to other routines for
	which the same change has been made.
	* tree-vect-data-refs.c (vect_preserves_scalar_order_p): Take
	dr_vec_infos instead of stmt_vec_infos.
	(vect_analyze_data_ref_dependence): Update call accordingly.
	(vect_slp_analyze_data_ref_dependence)
	(vect_record_base_alignments): Use DR_VECT_AUX.
	(vect_calculate_target_alignment, vect_compute_data_ref_alignment)
	(vect_update_misalignment_for_peel, verify_data_ref_alignment)
	(vector_alignment_reachable_p, vect_get_data_access_cost)
	(vect_peeling_supportable, vect_analyze_group_access_1)
	(vect_analyze_group_access, vect_analyze_data_ref_access)
	(vect_vfa_segment_size, vect_vfa_access_size, vect_vfa_align)
	(vect_compile_time_alias, vect_small_gap_p)
	(vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
	(vect_supportable_dr_alignment): Take references as dr_vec_infos
	instead of data_references.  Update calls to other routines for
	which the same change has been made.
	(vect_verify_datarefs_alignment, vect_get_peeling_costs_all_drs)
	(vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
	(vect_slp_analyze_and_verify_node_alignment)
	(vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
	(vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
	(vect_setup_realignment): Use dr_vec_infos.  Update calls after
	above changes.
	(_vect_peel_info::dr): Replace with...
	(_vect_peel_info::dr_info): ...this new field.
	(vect_peeling_hash_get_most_frequent)
	(vect_peeling_hash_choose_best_peeling): Update accordingly.
	(vect_peeling_hash_get_lowest_cost)
	(vect_enhance_data_refs_alignment): Likewise.  Update calls to other
	routines for which the same change has been made.
	(vect_peeling_hash_insert): Likewise.  Take a dr_vec_info instead of a
	data_reference.
	* tree-vect-loop-manip.c (get_misalign_in_elems)
	(vect_gen_prolog_loop_niters): Use dr_vec_infos.  Update calls after
	above changes.
	* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
	* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
	(vect_truncate_gather_scatter_offset, compare_step_with_zero)
	(get_group_load_store_type, get_negative_load_store_type)
	(vect_get_data_ptr_increment, vectorizable_store)
	(vectorizable_load): Likewise.
	(ensure_base_align): Take a dr_vec_info instead of a data_reference.
	Update calls to other routines for which the same change has been made.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:30:56.197256524 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:42:19.035663718 +0100
@@ -1294,15 +1294,15 @@ #define DR_MISALIGNMENT_UNKNOWN (-1)
 #define DR_MISALIGNMENT_UNINITIALIZED (-2)
 
 inline void
-set_dr_misalignment (struct data_reference *dr, int val)
+set_dr_misalignment (dr_vec_info *dr_info, int val)
 {
-  DR_VECT_AUX (dr)->misalignment = val;
+  dr_info->misalignment = val;
 }
 
 inline int
-dr_misalignment (struct data_reference *dr)
+dr_misalignment (dr_vec_info *dr_info)
 {
-  int misalign = DR_VECT_AUX (dr)->misalignment;
+  int misalign = dr_info->misalignment;
   gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
   return misalign;
 }
@@ -1313,52 +1313,51 @@ #define DR_MISALIGNMENT(DR) dr_misalignm
 #define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
 
 /* Only defined once DR_MISALIGNMENT is defined.  */
-#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
+#define DR_TARGET_ALIGNMENT(DR) ((DR)->target_alignment)
 
-/* Return true if data access DR is aligned to its target alignment
+/* Return true if data access DR_INFO is aligned to its target alignment
    (which may be less than a full vector).  */
 
 static inline bool
-aligned_access_p (struct data_reference *data_ref_info)
+aligned_access_p (dr_vec_info *dr_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) == 0);
+  return (DR_MISALIGNMENT (dr_info) == 0);
 }
 
 /* Return TRUE if the alignment of the data access is known, and FALSE
    otherwise.  */
 
 static inline bool
-known_alignment_for_access_p (struct data_reference *data_ref_info)
+known_alignment_for_access_p (dr_vec_info *dr_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
+  return (DR_MISALIGNMENT (dr_info) != DR_MISALIGNMENT_UNKNOWN);
 }
 
 /* Return the minimum alignment in bytes that the vectorized version
-   of DR is guaranteed to have.  */
+   of DR_INFO is guaranteed to have.  */
 
 static inline unsigned int
-vect_known_alignment_in_bytes (struct data_reference *dr)
+vect_known_alignment_in_bytes (dr_vec_info *dr_info)
 {
-  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
-    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
-  if (DR_MISALIGNMENT (dr) == 0)
-    return DR_TARGET_ALIGNMENT (dr);
-  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
+  if (DR_MISALIGNMENT (dr_info) == DR_MISALIGNMENT_UNKNOWN)
+    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
+  if (DR_MISALIGNMENT (dr_info) == 0)
+    return DR_TARGET_ALIGNMENT (dr_info);
+  return DR_MISALIGNMENT (dr_info) & -DR_MISALIGNMENT (dr_info);
 }
 
-/* Return the behavior of DR with respect to the vectorization context
+/* Return the behavior of DR_INFO with respect to the vectorization context
    (which for outer loop vectorization might not be the behavior recorded
-   in DR itself).  */
+   in DR_INFO itself).  */
 
 static inline innermost_loop_behavior *
-vect_dr_behavior (data_reference *dr)
+vect_dr_behavior (dr_vec_info *dr_info)
 {
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   if (loop_vinfo == NULL
       || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
-    return &DR_INNERMOST (dr);
+    return &DR_INNERMOST (dr_info->dr);
   else
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
 }
@@ -1451,17 +1450,17 @@ vect_max_vf (loop_vec_info loop_vinfo)
   return MAX_VECTORIZATION_FACTOR;
 }
 
-/* Return the size of the value accessed by unvectorized data reference DR.
-   This is only valid once STMT_VINFO_VECTYPE has been calculated for the
-   associated gimple statement, since that guarantees that DR accesses
-   either a scalar or a scalar equivalent.  ("Scalar equivalent" here
-   includes things like V1SI, which can be vectorized in the same way
+/* Return the size of the value accessed by unvectorized data reference
+   DR_INFO.  This is only valid once STMT_VINFO_VECTYPE has been calculated
+   for the associated gimple statement, since that guarantees that DR_INFO
+   accesses either a scalar or a scalar equivalent.  ("Scalar equivalent"
+   here includes things like V1SI, which can be vectorized in the same way
    as a plain SI.)  */
 
 inline unsigned int
-vect_get_scalar_dr_size (struct data_reference *dr)
+vect_get_scalar_dr_size (dr_vec_info *dr_info)
 {
-  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr))));
+  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr_info->dr))));
 }
 
 /* Source location + hotness information. */
@@ -1561,7 +1560,7 @@ extern tree vect_get_mask_type_for_stmt
 /* In tree-vect-data-refs.c.  */
 extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
 extern enum dr_alignment_support vect_supportable_dr_alignment
-                                           (struct data_reference *, bool);
+                                           (dr_vec_info *, bool);
 extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
                                            HOST_WIDE_INT *);
 extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-26 11:30:56.193256600 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-26 11:42:19.031663762 +0100
@@ -192,14 +192,16 @@ vect_check_nonzero_value (loop_vec_info
   LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
 }
 
-/* Return true if we know that the order of vectorized STMTINFO_A and
-   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
-   STMTINFO_B.  At least one of the statements is a write.  */
+/* Return true if we know that the order of vectorized DR_INFO_A and
+   vectorized DR_INFO_B will be the same as the order of DR_INFO_A and
+   DR_INFO_B.  At least one of the accesses is a write.  */
 
 static bool
-vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
-			       stmt_vec_info stmtinfo_b)
+vect_preserves_scalar_order_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b)
 {
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
+
   /* Single statements are always kept in their original order.  */
   if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
       && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
@@ -294,8 +296,10 @@ vect_analyze_data_ref_dependence (struct
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
   lambda_vector dist_v;
   unsigned int loop_depth;
 
@@ -471,7 +475,7 @@ vect_analyze_data_ref_dependence (struct
 		... = a[i];
 		a[i+1] = ...;
 	     where loads from the group interleave with the store.  */
-	  if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
+	  if (!vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
 	    {
 	      if (dump_enabled_p ())
 		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -609,6 +613,8 @@ vect_slp_analyze_data_ref_dependence (st
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
 
   /* We need to check dependences of statements marked as unvectorizable
      as well, they still can prohibit vectorization.  */
@@ -626,9 +632,9 @@ vect_slp_analyze_data_ref_dependence (st
 
   /* If dra and drb are part of the same interleaving chain consider
      them independent.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
-      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
-	  == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
+  if (STMT_VINFO_GROUPED_ACCESS (dr_info_a->stmt)
+      && (DR_GROUP_FIRST_ELEMENT (dr_info_a->stmt)
+	  == DR_GROUP_FIRST_ELEMENT (dr_info_b->stmt)))
     return false;
 
   /* Unknown data dependence.  */
@@ -842,7 +848,8 @@ vect_record_base_alignments (vec_info *v
   unsigned int i;
   FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       if (!DR_IS_CONDITIONAL_IN_STMT (dr)
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
@@ -858,34 +865,33 @@ vect_record_base_alignments (vec_info *v
     }
 }
 
-/* Return the target alignment for the vectorized form of DR.  */
+/* Return the target alignment for the vectorized form of DR_INFO.  */
 
 static unsigned int
-vect_calculate_target_alignment (struct data_reference *dr)
+vect_calculate_target_alignment (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
+  tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
   return targetm.vectorize.preferred_vector_alignment (vectype);
 }
 
 /* Function vect_compute_data_ref_alignment
 
-   Compute the misalignment of the data reference DR.
+   Compute the misalignment of the data reference DR_INFO.
 
    Output:
-   1. DR_MISALIGNMENT (DR) is defined.
+   1. DR_MISALIGNMENT (DR_INFO) is defined.
 
    FOR NOW: No analysis is actually performed. Misalignment is calculated
    only for trivial cases. TODO.  */
 
 static void
-vect_compute_data_ref_alignment (struct data_reference *dr)
+vect_compute_data_ref_alignment (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
-  tree ref = DR_REF (dr);
+  tree ref = DR_REF (dr_info->dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (dump_enabled_p ())
@@ -896,17 +902,17 @@ vect_compute_data_ref_alignment (struct
     loop = LOOP_VINFO_LOOP (loop_vinfo);
 
   /* Initialize misalignment to unknown.  */
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
 
   if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
     return;
 
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
   bool step_preserves_misalignment_p;
 
   unsigned HOST_WIDE_INT vector_alignment
-    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
-  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
+    = vect_calculate_target_alignment (dr_info) / BITS_PER_UNIT;
+  DR_TARGET_ALIGNMENT (dr_info) = vector_alignment;
 
   /* No step for BB vectorization.  */
   if (!loop)
@@ -924,7 +930,7 @@ vect_compute_data_ref_alignment (struct
   else if (nested_in_vect_loop_p (loop, stmt_info))
     {
       step_preserves_misalignment_p
-	= (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
+	= (DR_STEP_ALIGNMENT (dr_info->dr) % vector_alignment) == 0;
 
       if (dump_enabled_p ())
 	{
@@ -946,7 +952,7 @@ vect_compute_data_ref_alignment (struct
     {
       poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
       step_preserves_misalignment_p
-	= multiple_p (DR_STEP_ALIGNMENT (dr) * vf, vector_alignment);
+	= multiple_p (DR_STEP_ALIGNMENT (dr_info->dr) * vf, vector_alignment);
 
       if (!step_preserves_misalignment_p && dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1009,8 +1015,8 @@ vect_compute_data_ref_alignment (struct
           dump_printf (MSG_NOTE, "\n");
         }
 
-      DR_VECT_AUX (dr)->base_decl = base;
-      DR_VECT_AUX (dr)->base_misaligned = true;
+      dr_info->base_decl = base;
+      dr_info->base_misaligned = true;
       base_misalignment = 0;
     }
   poly_int64 misalignment
@@ -1038,12 +1044,13 @@ vect_compute_data_ref_alignment (struct
       return;
     }
 
-  SET_DR_MISALIGNMENT (dr, const_misalignment);
+  SET_DR_MISALIGNMENT (dr_info, const_misalignment);
 
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
+		       "misalign = %d bytes of ref ",
+		       DR_MISALIGNMENT (dr_info));
       dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
       dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
     }
@@ -1052,28 +1059,28 @@ vect_compute_data_ref_alignment (struct
 }
 
 /* Function vect_update_misalignment_for_peel.
-   Sets DR's misalignment
-   - to 0 if it has the same alignment as DR_PEEL,
-   - to the misalignment computed using NPEEL if DR's salignment is known,
+   Sets DR_INFO's misalignment
+   - to 0 if it has the same alignment as DR_PEEL_INFO,
+   - to the misalignment computed using NPEEL if DR_INFO's misalignment is
+     known,
    - to -1 (unknown) otherwise.
 
-   DR - the data reference whose misalignment is to be adjusted.
-   DR_PEEL - the data reference whose misalignment is being made
-             zero in the vector loop by the peel.
+   DR_INFO - the data reference whose misalignment is to be adjusted.
+   DR_PEEL_INFO - the data reference whose misalignment is being made
+		  zero in the vector loop by the peel.
    NPEEL - the number of iterations in the peel loop if the misalignment
-           of DR_PEEL is known at compile time.  */
+           of DR_PEEL_INFO is known at compile time.  */
 
 static void
-vect_update_misalignment_for_peel (struct data_reference *dr,
-                                   struct data_reference *dr_peel, int npeel)
+vect_update_misalignment_for_peel (dr_vec_info *dr_info,
+				   dr_vec_info *dr_peel_info, int npeel)
 {
   unsigned int i;
   vec<dr_p> same_aligned_drs;
   struct data_reference *current_dr;
-  int dr_size = vect_get_scalar_dr_size (dr);
-  int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
+  int dr_size = vect_get_scalar_dr_size (dr_info);
+  int dr_peel_size = vect_get_scalar_dr_size (dr_peel_info);
+  stmt_vec_info stmt_info = dr_info->stmt;
+  stmt_vec_info peel_stmt_info = dr_peel_info->stmt;
 
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
@@ -1084,51 +1091,52 @@ vect_update_misalignment_for_peel (struc
 
   /* It can be assumed that the data refs with the same alignment as dr_peel
      are aligned in the vector loop.  */
-  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
+  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
   FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
     {
-      if (current_dr != dr)
+      if (current_dr != dr_info->dr)
         continue;
-      gcc_assert (!known_alignment_for_access_p (dr)
-		  || !known_alignment_for_access_p (dr_peel)
-		  || (DR_MISALIGNMENT (dr) / dr_size
-		      == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
-      SET_DR_MISALIGNMENT (dr, 0);
+      gcc_assert (!known_alignment_for_access_p (dr_info)
+		  || !known_alignment_for_access_p (dr_peel_info)
+		  || (DR_MISALIGNMENT (dr_info) / dr_size
+		      == DR_MISALIGNMENT (dr_peel_info) / dr_peel_size));
+      SET_DR_MISALIGNMENT (dr_info, 0);
       return;
     }
 
-  if (known_alignment_for_access_p (dr)
-      && known_alignment_for_access_p (dr_peel))
+  if (known_alignment_for_access_p (dr_info)
+      && known_alignment_for_access_p (dr_peel_info))
     {
-      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
-      int misal = DR_MISALIGNMENT (dr);
+      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					    size_zero_node) < 0;
+      int misal = DR_MISALIGNMENT (dr_info);
       misal += negative ? -npeel * dr_size : npeel * dr_size;
-      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
-      SET_DR_MISALIGNMENT (dr, misal);
+      misal &= DR_TARGET_ALIGNMENT (dr_info) - 1;
+      SET_DR_MISALIGNMENT (dr_info, misal);
       return;
     }
 
   if (dump_enabled_p ())
     dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
 		     "to unknown (-1).\n");
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
 }
 
 
 /* Function verify_data_ref_alignment
 
-   Return TRUE if DR can be handled with respect to alignment.  */
+   Return TRUE if DR_INFO can be handled with respect to alignment.  */
 
 static bool
-verify_data_ref_alignment (data_reference_p dr)
+verify_data_ref_alignment (dr_vec_info *dr_info)
 {
   enum dr_alignment_support supportable_dr_alignment
-    = vect_supportable_dr_alignment (dr, false);
+    = vect_supportable_dr_alignment (dr_info, false);
   if (!supportable_dr_alignment)
     {
       if (dump_enabled_p ())
 	{
-	  if (DR_IS_READ (dr))
+	  if (DR_IS_READ (dr_info->dr))
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			     "not vectorized: unsupported unaligned load.");
 	  else
@@ -1137,7 +1145,7 @@ verify_data_ref_alignment (data_referenc
 			     "store.");
 
 	  dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-			     DR_REF (dr));
+			     DR_REF (dr_info->dr));
 	  dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
 	}
       return false;
@@ -1164,7 +1172,8 @@ vect_verify_datarefs_alignment (loop_vec
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1180,7 +1189,7 @@ vect_verify_datarefs_alignment (loop_vec
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      if (! verify_data_ref_alignment (dr))
+      if (! verify_data_ref_alignment (dr_info))
 	return false;
     }
 
@@ -1202,13 +1211,13 @@ not_size_aligned (tree exp)
 
 /* Function vector_alignment_reachable_p
 
-   Return true if vector alignment for DR is reachable by peeling
+   Return true if vector alignment for DR_INFO is reachable by peeling
    a few loop iterations.  Return false otherwise.  */
 
 static bool
-vector_alignment_reachable_p (struct data_reference *dr)
+vector_alignment_reachable_p (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -1219,13 +1228,13 @@ vector_alignment_reachable_p (struct dat
       int elem_size, mis_in_elements;
 
       /* FORNOW: handle only known alignment.  */
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	return false;
 
       poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
       poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
       elem_size = vector_element_size (vector_size, nelements);
-      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
+      mis_in_elements = DR_MISALIGNMENT (dr_info) / elem_size;
 
       if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
 	return false;
@@ -1233,7 +1242,7 @@ vector_alignment_reachable_p (struct dat
 
   /* If misalignment is known at the compile time then allow peeling
      only if natural alignment is reachable through peeling.  */
-  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
+  if (known_alignment_for_access_p (dr_info) && !aligned_access_p (dr_info))
     {
       HOST_WIDE_INT elmsize =
 		int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
@@ -1242,9 +1251,9 @@ vector_alignment_reachable_p (struct dat
 	  dump_printf_loc (MSG_NOTE, vect_location,
 	                   "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
 	  dump_printf (MSG_NOTE,
-	               ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
+	               ". misalignment = %d.\n", DR_MISALIGNMENT (dr_info));
 	}
-      if (DR_MISALIGNMENT (dr) % elmsize)
+      if (DR_MISALIGNMENT (dr_info) % elmsize)
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1253,10 +1262,10 @@ vector_alignment_reachable_p (struct dat
 	}
     }
 
-  if (!known_alignment_for_access_p (dr))
+  if (!known_alignment_for_access_p (dr_info))
     {
-      tree type = TREE_TYPE (DR_REF (dr));
-      bool is_packed = not_size_aligned (DR_REF (dr));
+      tree type = TREE_TYPE (DR_REF (dr_info->dr));
+      bool is_packed = not_size_aligned (DR_REF (dr_info->dr));
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 	                 "Unknown misalignment, %snaturally aligned\n",
@@ -1268,16 +1277,16 @@ vector_alignment_reachable_p (struct dat
 }
 
 
-/* Calculate the cost of the memory access represented by DR.  */
+/* Calculate the cost of the memory access represented by DR_INFO.  */
 
 static void
-vect_get_data_access_cost (struct data_reference *dr,
+vect_get_data_access_cost (dr_vec_info *dr_info,
                            unsigned int *inside_cost,
                            unsigned int *outside_cost,
 			   stmt_vector_for_cost *body_cost_vec,
 			   stmt_vector_for_cost *prologue_cost_vec)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   int ncopies;
 
@@ -1286,7 +1295,7 @@ vect_get_data_access_cost (struct data_r
   else
     ncopies = vect_get_num_copies (loop_vinfo, STMT_VINFO_VECTYPE (stmt_info));
 
-  if (DR_IS_READ (dr))
+  if (DR_IS_READ (dr_info->dr))
     vect_get_load_cost (stmt_info, ncopies, true, inside_cost, outside_cost,
 			prologue_cost_vec, body_cost_vec, false);
   else
@@ -1301,7 +1310,7 @@ vect_get_data_access_cost (struct data_r
 
 typedef struct _vect_peel_info
 {
-  struct data_reference *dr;
+  dr_vec_info *dr_info;
   int npeel;
   unsigned int count;
 } *vect_peel_info;
@@ -1335,16 +1344,17 @@ peel_info_hasher::equal (const _vect_pee
 }
 
 
-/* Insert DR into peeling hash table with NPEEL as key.  */
+/* Insert DR_INFO into peeling hash table with NPEEL as key.  */
 
 static void
 vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
-			  loop_vec_info loop_vinfo, struct data_reference *dr,
+			  loop_vec_info loop_vinfo, dr_vec_info *dr_info,
                           int npeel)
 {
   struct _vect_peel_info elem, *slot;
   _vect_peel_info **new_slot;
-  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
+  bool supportable_dr_alignment
+    = vect_supportable_dr_alignment (dr_info, true);
 
   elem.npeel = npeel;
   slot = peeling_htab->find (&elem);
@@ -1354,7 +1364,7 @@ vect_peeling_hash_insert (hash_table<pee
     {
       slot = XNEW (struct _vect_peel_info);
       slot->npeel = npeel;
-      slot->dr = dr;
+      slot->dr_info = dr_info;
       slot->count = 1;
       new_slot = peeling_htab->find_slot (slot, INSERT);
       *new_slot = slot;
@@ -1381,19 +1391,19 @@ vect_peeling_hash_get_most_frequent (_ve
     {
       max->peel_info.npeel = elem->npeel;
       max->peel_info.count = elem->count;
-      max->peel_info.dr = elem->dr;
+      max->peel_info.dr_info = elem->dr_info;
     }
 
   return 1;
 }
 
 /* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
+   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
    misalignment will be zero after peeling.  */
 
 static void
 vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
-				struct data_reference *dr0,
+				dr_vec_info *dr0_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
 				stmt_vector_for_cost *body_cost_vec,
@@ -1406,7 +1416,8 @@ vect_get_peeling_costs_all_drs (vec<data
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
 
@@ -1423,16 +1434,16 @@ vect_get_peeling_costs_all_drs (vec<data
 	continue;
 
       int save_misalignment;
-      save_misalignment = DR_MISALIGNMENT (dr);
+      save_misalignment = DR_MISALIGNMENT (dr_info);
       if (npeel == 0)
 	;
-      else if (unknown_misalignment && dr == dr0)
-	SET_DR_MISALIGNMENT (dr, 0);
+      else if (unknown_misalignment && dr_info == dr0_info)
+	SET_DR_MISALIGNMENT (dr_info, 0);
       else
-	vect_update_misalignment_for_peel (dr, dr0, npeel);
-      vect_get_data_access_cost (dr, inside_cost, outside_cost,
+	vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
+      vect_get_data_access_cost (dr_info, inside_cost, outside_cost,
 				 body_cost_vec, prologue_cost_vec);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
     }
 }
 
@@ -1446,7 +1457,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   vect_peel_info elem = *slot;
   int dummy;
   unsigned int inside_cost = 0, outside_cost = 0;
-  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
+  stmt_vec_info stmt_info = elem->dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
 		       epilogue_cost_vec;
@@ -1456,7 +1467,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   epilogue_cost_vec.create (2);
 
   vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->dr, &inside_cost, &outside_cost,
+				  elem->dr_info, &inside_cost, &outside_cost,
 				  &body_cost_vec, &prologue_cost_vec,
 				  elem->npeel, false);
 
@@ -1480,7 +1491,7 @@ vect_peeling_hash_get_lowest_cost (_vect
     {
       min->inside_cost = inside_cost;
       min->outside_cost = outside_cost;
-      min->peel_info.dr = elem->dr;
+      min->peel_info.dr_info = elem->dr_info;
       min->peel_info.npeel = elem->npeel;
       min->peel_info.count = elem->count;
     }
@@ -1499,7 +1510,7 @@ vect_peeling_hash_choose_best_peeling (h
 {
    struct _vect_peel_extended_info res;
 
-   res.peel_info.dr = NULL;
+   res.peel_info.dr_info = NULL;
 
    if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
      {
@@ -1523,7 +1534,7 @@ vect_peeling_hash_choose_best_peeling (h
 /* Return true if the new peeling NPEEL is supported.  */
 
 static bool
-vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
+vect_peeling_supportable (loop_vec_info loop_vinfo, dr_vec_info *dr0_info,
 			  unsigned npeel)
 {
   unsigned i;
@@ -1536,10 +1547,11 @@ vect_peeling_supportable (loop_vec_info
     {
       int save_misalignment;
 
-      if (dr == dr0)
+      if (dr == dr0_info->dr)
 	continue;
 
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       /* For interleaving, only the alignment of the first access
 	 matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
@@ -1552,10 +1564,11 @@ vect_peeling_supportable (loop_vec_info
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      save_misalignment = DR_MISALIGNMENT (dr);
-      vect_update_misalignment_for_peel (dr, dr0, npeel);
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      save_misalignment = DR_MISALIGNMENT (dr_info);
+      vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (dr_info, false);
+      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
 
       if (!supportable_dr_alignment)
 	return false;
@@ -1661,7 +1674,8 @@ vect_enhance_data_refs_alignment (loop_v
   vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   enum dr_alignment_support supportable_dr_alignment;
-  struct data_reference *dr0 = NULL, *first_store = NULL;
+  dr_vec_info *first_store = NULL;
+  dr_vec_info *dr0_info = NULL;
   struct data_reference *dr;
   unsigned int i, j;
   bool do_peeling = false;
@@ -1671,7 +1685,7 @@ vect_enhance_data_refs_alignment (loop_v
   bool one_misalignment_known = false;
   bool one_misalignment_unknown = false;
   bool one_dr_unsupportable = false;
-  struct data_reference *unsupportable_dr = NULL;
+  dr_vec_info *unsupportable_dr_info = NULL;
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
   unsigned possible_npeel_number = 1;
   tree vectype;
@@ -1718,7 +1732,8 @@ vect_enhance_data_refs_alignment (loop_v
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1741,21 +1756,23 @@ vect_enhance_data_refs_alignment (loop_v
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
-      do_peeling = vector_alignment_reachable_p (dr);
+      supportable_dr_alignment = vect_supportable_dr_alignment (dr_info, true);
+      do_peeling = vector_alignment_reachable_p (dr_info);
       if (do_peeling)
         {
-          if (known_alignment_for_access_p (dr))
+          if (known_alignment_for_access_p (dr_info))
             {
 	      unsigned int npeel_tmp = 0;
 	      bool negative = tree_int_cst_compare (DR_STEP (dr),
 						    size_zero_node) < 0;
 
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
-	      unsigned int dr_size = vect_get_scalar_dr_size (dr);
-	      mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
-	      if (DR_MISALIGNMENT (dr) != 0)
+	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
+	      unsigned int dr_size = vect_get_scalar_dr_size (dr_info);
+	      mis = (negative
+		     ? DR_MISALIGNMENT (dr_info)
+		     : -DR_MISALIGNMENT (dr_info));
+	      if (DR_MISALIGNMENT (dr_info) != 0)
 		npeel_tmp = (mis & (target_align - 1)) / dr_size;
 
               /* For multiple types, it is possible that the bigger type access
@@ -1780,7 +1797,7 @@ vect_enhance_data_refs_alignment (loop_v
 
 		  /* NPEEL_TMP is 0 when there is no misalignment, but also
 		     allow peeling NELEMENTS.  */
-		  if (DR_MISALIGNMENT (dr) == 0)
+		  if (DR_MISALIGNMENT (dr_info) == 0)
 		    possible_npeel_number++;
 		}
 
@@ -1789,7 +1806,7 @@ vect_enhance_data_refs_alignment (loop_v
               for (j = 0; j < possible_npeel_number; j++)
                 {
                   vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
-					    dr, npeel_tmp);
+					    dr_info, npeel_tmp);
 		  npeel_tmp += target_align / dr_size;
                 }
 
@@ -1803,11 +1820,11 @@ vect_enhance_data_refs_alignment (loop_v
                  stores over load.  */
 	      unsigned same_align_drs
 		= STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
-	      if (!dr0
+	      if (!dr0_info
 		  || same_align_drs_max < same_align_drs)
 		{
 		  same_align_drs_max = same_align_drs;
-		  dr0 = dr;
+		  dr0_info = dr_info;
 		}
 	      /* For data-refs with the same number of related
 		 accesses prefer the one where the misalign
@@ -1816,13 +1833,13 @@ vect_enhance_data_refs_alignment (loop_v
 		{
 		  struct loop *ivloop0, *ivloop;
 		  ivloop0 = outermost_invariant_loop_for_expr
-		    (loop, DR_BASE_ADDRESS (dr0));
+		    (loop, DR_BASE_ADDRESS (dr0_info->dr));
 		  ivloop = outermost_invariant_loop_for_expr
 		    (loop, DR_BASE_ADDRESS (dr));
 		  if ((ivloop && !ivloop0)
 		      || (ivloop && ivloop0
 			  && flow_loop_nested_p (ivloop, ivloop0)))
-		    dr0 = dr;
+		    dr0_info = dr_info;
 		}
 
 	      one_misalignment_unknown = true;
@@ -1832,16 +1849,16 @@ vect_enhance_data_refs_alignment (loop_v
 	      if (!supportable_dr_alignment)
 	      {
 		one_dr_unsupportable = true;
-		unsupportable_dr = dr;
+		unsupportable_dr_info = dr_info;
 	      }
 
 	      if (!first_store && DR_IS_WRITE (dr))
-		first_store = dr;
+		first_store = dr_info;
             }
         }
       else
         {
-          if (!aligned_access_p (dr))
+          if (!aligned_access_p (dr_info))
             {
               if (dump_enabled_p ())
                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1879,7 +1896,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, dr0,
+      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
@@ -1905,7 +1922,7 @@ vect_enhance_data_refs_alignment (loop_v
 	  || (load_inside_cost == store_inside_cost
 	      && load_outside_cost > store_outside_cost))
 	{
-	  dr0 = first_store;
+	  dr0_info = first_store;
 	  peel_for_unknown_alignment.inside_cost = store_inside_cost;
 	  peel_for_unknown_alignment.outside_cost = store_outside_cost;
 	}
@@ -1929,18 +1946,18 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       peel_for_unknown_alignment.peel_info.count = 1
-	+ STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
+	+ STMT_VINFO_SAME_ALIGN_REFS (dr0_info->stmt).length ();
     }
 
   peel_for_unknown_alignment.peel_info.npeel = 0;
-  peel_for_unknown_alignment.peel_info.dr = dr0;
+  peel_for_unknown_alignment.peel_info.dr_info = dr0_info;
 
   best_peel = peel_for_unknown_alignment;
 
   peel_for_known_alignment.inside_cost = INT_MAX;
   peel_for_known_alignment.outside_cost = INT_MAX;
   peel_for_known_alignment.peel_info.count = 0;
-  peel_for_known_alignment.peel_info.dr = NULL;
+  peel_for_known_alignment.peel_info.dr_info = NULL;
 
   if (do_peeling && one_misalignment_known)
     {
@@ -1952,7 +1969,7 @@ vect_enhance_data_refs_alignment (loop_v
     }
 
   /* Compare costs of peeling for known and unknown alignment. */
-  if (peel_for_known_alignment.peel_info.dr != NULL
+  if (peel_for_known_alignment.peel_info.dr_info != NULL
       && peel_for_unknown_alignment.inside_cost
       >= peel_for_known_alignment.inside_cost)
     {
@@ -1969,7 +1986,7 @@ vect_enhance_data_refs_alignment (loop_v
      since we'd have to discard a chosen peeling except when it accidentally
      aligned the unsupportable data ref.  */
   if (one_dr_unsupportable)
-    dr0 = unsupportable_dr;
+    dr0_info = unsupportable_dr_info;
   else if (do_peeling)
     {
       /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
@@ -2000,7 +2017,7 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       npeel = best_peel.peel_info.npeel;
-      dr0 = best_peel.peel_info.dr;
+      dr0_info = best_peel.peel_info.dr_info;
 
       /* If no peeling is not more expensive than the best peeling we
 	 have so far, don't perform any peeling.  */
@@ -2010,12 +2027,12 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
+      stmt_vec_info stmt_info = dr0_info->stmt;
       vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-      if (known_alignment_for_access_p (dr0))
+      if (known_alignment_for_access_p (dr0_info))
         {
-	  bool negative = tree_int_cst_compare (DR_STEP (dr0),
+	  bool negative = tree_int_cst_compare (DR_STEP (dr0_info->dr),
 						size_zero_node) < 0;
           if (!npeel)
             {
@@ -2024,16 +2041,17 @@ vect_enhance_data_refs_alignment (loop_v
                  updating DR_MISALIGNMENT values.  The peeling factor is the
                  vectorization factor minus the misalignment as an element
                  count.  */
-	      mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+	      mis = (negative
+		     ? DR_MISALIGNMENT (dr0_info)
+		     : -DR_MISALIGNMENT (dr0_info));
+	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
 	      npeel = ((mis & (target_align - 1))
-		       / vect_get_scalar_dr_size (dr0));
+		       / vect_get_scalar_dr_size (dr0_info));
             }
 
 	  /* For interleaved data access every iteration accesses all the
 	     members of the group, therefore we divide the number of iterations
 	     by the group size.  */
-	  stmt_info = vect_dr_stmt (dr0);
 	  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	    npeel /= DR_GROUP_SIZE (stmt_info);
 
@@ -2043,11 +2061,11 @@ vect_enhance_data_refs_alignment (loop_v
         }
 
       /* Ensure that all datarefs can be vectorized after the peel.  */
-      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
+      if (!vect_peeling_supportable (loop_vinfo, dr0_info, npeel))
 	do_peeling = false;
 
       /* Check if all datarefs are supportable and log.  */
-      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
+      if (do_peeling && known_alignment_for_access_p (dr0_info) && npeel == 0)
         {
           stat = vect_verify_datarefs_alignment (loop_vinfo);
           if (!stat)
@@ -2066,8 +2084,9 @@ vect_enhance_data_refs_alignment (loop_v
               unsigned max_peel = npeel;
               if (max_peel == 0)
                 {
-		  unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
-		  max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
+		  unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
+		  max_peel = (target_align
+			      / vect_get_scalar_dr_size (dr0_info) - 1);
                 }
               if (max_peel > max_allowed_peel)
                 {
@@ -2103,25 +2122,26 @@ vect_enhance_data_refs_alignment (loop_v
              vectorization factor times the size).  Otherwise, the
              misalignment of DR_i must be set to unknown.  */
 	  FOR_EACH_VEC_ELT (datarefs, i, dr)
-	    if (dr != dr0)
+	    if (dr != dr0_info->dr)
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		stmt_info = vect_dr_stmt (dr);
+		dr_vec_info *dr_info = DR_VECT_AUX (dr);
+		stmt_info = dr_info->stmt;
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
 
-		vect_update_misalignment_for_peel (dr, dr0, npeel);
+		vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
 	      }
 
-          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
+          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
           if (npeel)
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
           else
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
-	      = DR_MISALIGNMENT (dr0);
-	  SET_DR_MISALIGNMENT (dr0, 0);
+	      = DR_MISALIGNMENT (dr0_info);
+	  SET_DR_MISALIGNMENT (dr0_info, 0);
 	  if (dump_enabled_p ())
             {
               dump_printf_loc (MSG_NOTE, vect_location,
@@ -2156,11 +2176,12 @@ vect_enhance_data_refs_alignment (loop_v
     {
       FOR_EACH_VEC_ELT (datarefs, i, dr)
         {
-	  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+	  dr_vec_info *dr_info = DR_VECT_AUX (dr);
+	  stmt_vec_info stmt_info = dr_info->stmt;
 
 	  /* For interleaving, only the alignment of the first access
 	     matters.  */
-	  if (aligned_access_p (dr)
+	  if (aligned_access_p (dr_info)
 	      || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
 		  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
 	    continue;
@@ -2175,14 +2196,15 @@ vect_enhance_data_refs_alignment (loop_v
 	      break;
 	    }
 
-	  supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+	  supportable_dr_alignment
+	    = vect_supportable_dr_alignment (dr_info, false);
 
           if (!supportable_dr_alignment)
             {
               int mask;
               tree vectype;
 
-              if (known_alignment_for_access_p (dr)
+              if (known_alignment_for_access_p (dr_info)
                   || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
                      >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
                 {
@@ -2190,7 +2212,6 @@ vect_enhance_data_refs_alignment (loop_v
                   break;
                 }
 
-	      stmt_info = vect_dr_stmt (dr);
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
 	      gcc_assert (vectype);
 
@@ -2241,8 +2262,8 @@ vect_enhance_data_refs_alignment (loop_v
          of the loop being vectorized.  */
       FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
         {
-          dr = STMT_VINFO_DATA_REF (stmt_info);
-	  SET_DR_MISALIGNMENT (dr, 0);
+	  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+	  SET_DR_MISALIGNMENT (dr_info, 0);
 	  if (dump_enabled_p ())
             dump_printf_loc (MSG_NOTE, vect_location,
                              "Alignment of access forced using versioning.\n");
@@ -2278,8 +2299,10 @@ vect_find_same_alignment_drs (struct dat
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
     return;
@@ -2302,9 +2325,9 @@ vect_find_same_alignment_drs (struct dat
   if (maybe_ne (diff, 0))
     {
       /* Get the wider of the two alignments.  */
-      unsigned int align_a = (vect_calculate_target_alignment (dra)
+      unsigned int align_a = (vect_calculate_target_alignment (dr_info_a)
 			      / BITS_PER_UNIT);
-      unsigned int align_b = (vect_calculate_target_alignment (drb)
+      unsigned int align_b = (vect_calculate_target_alignment (dr_info_b)
 			      / BITS_PER_UNIT);
       unsigned int max_align = MAX (align_a, align_b);
 
@@ -2352,9 +2375,9 @@ vect_analyze_data_refs_alignment (loop_v
   vect_record_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
-      if (STMT_VINFO_VECTORIZABLE (stmt_info))
-	vect_compute_data_ref_alignment (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
+	vect_compute_data_ref_alignment (dr_info);
     }
 
   return true;
@@ -2370,17 +2393,17 @@ vect_slp_analyze_and_verify_node_alignme
      the node is permuted in which case we start from the first
      element in the group.  */
   stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
     first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
 
-  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
-  vect_compute_data_ref_alignment (dr);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
+  vect_compute_data_ref_alignment (dr_info);
   /* For creating the data-ref pointer we need alignment of the
      first element anyway.  */
-  if (dr != first_dr)
-    vect_compute_data_ref_alignment (first_dr);
-  if (! verify_data_ref_alignment (dr))
+  if (dr_info != first_dr_info)
+    vect_compute_data_ref_alignment (first_dr_info);
+  if (! verify_data_ref_alignment (dr_info))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2418,19 +2441,20 @@ vect_slp_analyze_and_verify_instance_ali
 }
 
 
-/* Analyze groups of accesses: check that DR belongs to a group of
+/* Analyze groups of accesses: check that DR_INFO belongs to a group of
    accesses of legal size, step, etc.  Detect gaps, single element
    interleaving, and other special cases. Set grouped access info.
    Collect groups of strided stores for further use in SLP analysis.
    Worker for vect_analyze_group_access.  */
 
 static bool
-vect_analyze_group_access_1 (struct data_reference *dr)
+vect_analyze_group_access_1 (dr_vec_info *dr_info)
 {
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
   HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   HOST_WIDE_INT dr_step = -1;
@@ -2507,7 +2531,7 @@ vect_analyze_group_access_1 (struct data
       if (bb_vinfo)
 	{
 	  /* Mark the statement as unvectorizable.  */
-	  STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	  STMT_VINFO_VECTORIZABLE (stmt_info) = false;
 	  return true;
 	}
 
@@ -2655,18 +2679,18 @@ vect_analyze_group_access_1 (struct data
   return true;
 }
 
-/* Analyze groups of accesses: check that DR belongs to a group of
+/* Analyze groups of accesses: check that DR_INFO belongs to a group of
    accesses of legal size, step, etc.  Detect gaps, single element
    interleaving, and other special cases. Set grouped access info.
    Collect groups of strided stores for further use in SLP analysis.  */
 
 static bool
-vect_analyze_group_access (struct data_reference *dr)
+vect_analyze_group_access (dr_vec_info *dr_info)
 {
-  if (!vect_analyze_group_access_1 (dr))
+  if (!vect_analyze_group_access_1 (dr_info))
     {
       /* Dissolve the group if present.  */
-      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
+      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (dr_info->stmt);
       while (stmt_info)
 	{
 	  stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
@@ -2679,16 +2703,17 @@ vect_analyze_group_access (struct data_r
   return true;
 }
 
-/* Analyze the access pattern of the data-reference DR.
+/* Analyze the access pattern of the data-reference DR_INFO.
    In case of non-consecutive accesses call vect_analyze_group_access() to
    analyze groups of accesses.  */
 
 static bool
-vect_analyze_data_ref_access (struct data_reference *dr)
+vect_analyze_data_ref_access (dr_vec_info *dr_info)
 {
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
 
@@ -2768,10 +2793,10 @@ vect_analyze_data_ref_access (struct dat
   if (TREE_CODE (step) != INTEGER_CST)
     return (STMT_VINFO_STRIDED_P (stmt_info)
 	    && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
-		|| vect_analyze_group_access (dr)));
+		|| vect_analyze_group_access (dr_info)));
 
   /* Not consecutive access - check if it's a part of interleaving group.  */
-  return vect_analyze_group_access (dr);
+  return vect_analyze_group_access (dr_info);
 }
 
 /* Compare two data-references DRA and DRB to group them into chunks
@@ -2916,7 +2941,8 @@ vect_analyze_data_ref_accesses (vec_info
   for (i = 0; i < datarefs_copy.length () - 1;)
     {
       data_reference_p dra = datarefs_copy[i];
-      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
+      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+      stmt_vec_info stmtinfo_a = dr_info_a->stmt;
       stmt_vec_info lastinfo = NULL;
       if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
 	  || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
@@ -2927,7 +2953,8 @@ vect_analyze_data_ref_accesses (vec_info
       for (i = i + 1; i < datarefs_copy.length (); ++i)
 	{
 	  data_reference_p drb = datarefs_copy[i];
-	  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+	  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+	  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 	  if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
 	      || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
 	    break;
@@ -3050,25 +3077,28 @@ vect_analyze_data_ref_accesses (vec_info
     }
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
-    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
-        && !vect_analyze_data_ref_access (dr))
-      {
-	if (dump_enabled_p ())
-	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-	                   "not vectorized: complicated access pattern.\n");
+    {
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
+	  && !vect_analyze_data_ref_access (dr_info))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "not vectorized: complicated access pattern.\n");
 
-        if (is_a <bb_vec_info> (vinfo))
-	  {
-	    /* Mark the statement as not vectorizable.  */
-	    STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
-	    continue;
-	  }
-        else
-	  {
-	    datarefs_copy.release ();
-	    return false;
-	  }
-      }
+	  if (is_a <bb_vec_info> (vinfo))
+	    {
+	      /* Mark the statement as not vectorizable.  */
+	      STMT_VINFO_VECTORIZABLE (dr_info->stmt) = false;
+	      continue;
+	    }
+	  else
+	    {
+	      datarefs_copy.release ();
+	      return false;
+	    }
+	}
+    }
 
   datarefs_copy.release ();
   return true;
@@ -3077,7 +3107,7 @@ vect_analyze_data_ref_accesses (vec_info
 /* Function vect_vfa_segment_size.
 
    Input:
-     DR: The data reference.
+     DR_INFO: The data reference.
      LENGTH_FACTOR: segment length to consider.
 
    Return a value suitable for the dr_with_seg_len::seg_len field.
@@ -3086,32 +3116,32 @@ vect_analyze_data_ref_accesses (vec_info
    the size of the access; in effect it only describes the first byte.  */
 
 static tree
-vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
+vect_vfa_segment_size (dr_vec_info *dr_info, tree length_factor)
 {
   length_factor = size_binop (MINUS_EXPR,
 			      fold_convert (sizetype, length_factor),
 			      size_one_node);
-  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr)),
+  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr_info->dr)),
 		     length_factor);
 }
 
-/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
+/* Return a value that, when added to abs (vect_vfa_segment_size (DR_INFO)),
    gives the worst-case number of bytes covered by the segment.  */
 
 static unsigned HOST_WIDE_INT
-vect_vfa_access_size (data_reference *dr)
+vect_vfa_access_size (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
-  tree ref_type = TREE_TYPE (DR_REF (dr));
+  stmt_vec_info stmt_vinfo = dr_info->stmt;
+  tree ref_type = TREE_TYPE (DR_REF (dr_info->dr));
   unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
   unsigned HOST_WIDE_INT access_size = ref_size;
   if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
     {
-      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
+      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
       access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
     }
   if (STMT_VINFO_VEC_STMT (stmt_vinfo)
-      && (vect_supportable_dr_alignment (dr, false)
+      && (vect_supportable_dr_alignment (dr_info, false)
 	  == dr_explicit_realign_optimized))
     {
       /* We might access a full vector's worth.  */
@@ -3121,12 +3151,13 @@ vect_vfa_access_size (data_reference *dr
   return access_size;
 }
 
-/* Get the minimum alignment for all the scalar accesses that DR describes.  */
+/* Get the minimum alignment for all the scalar accesses that DR_INFO
+   describes.  */
 
 static unsigned int
-vect_vfa_align (const data_reference *dr)
+vect_vfa_align (dr_vec_info *dr_info)
 {
-  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
+  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
 }
 
 /* Function vect_no_alias_p.
@@ -3139,27 +3170,27 @@ vect_vfa_align (const data_reference *dr
    of dr_with_seg_len::{seg_len,access_size} for A and B.  */
 
 static int
-vect_compile_time_alias (struct data_reference *a, struct data_reference *b,
+vect_compile_time_alias (dr_vec_info *a, dr_vec_info *b,
 			 tree segment_length_a, tree segment_length_b,
 			 unsigned HOST_WIDE_INT access_size_a,
 			 unsigned HOST_WIDE_INT access_size_b)
 {
-  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a));
-  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b));
+  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a->dr));
+  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b->dr));
   poly_uint64 const_length_a;
   poly_uint64 const_length_b;
 
   /* For negative step, we need to adjust address range by TYPE_SIZE_UNIT
      bytes, e.g., int a[3] -> a[1] range is [a+4, a+16) instead of
      [a, a+12) */
-  if (tree_int_cst_compare (DR_STEP (a), size_zero_node) < 0)
+  if (tree_int_cst_compare (DR_STEP (a->dr), size_zero_node) < 0)
     {
       const_length_a = (-wi::to_poly_wide (segment_length_a)).force_uhwi ();
       offset_a = (offset_a + access_size_a) - const_length_a;
     }
   else
     const_length_a = tree_to_poly_uint64 (segment_length_a);
-  if (tree_int_cst_compare (DR_STEP (b), size_zero_node) < 0)
+  if (tree_int_cst_compare (DR_STEP (b->dr), size_zero_node) < 0)
     {
       const_length_b = (-wi::to_poly_wide (segment_length_b)).force_uhwi ();
       offset_b = (offset_b + access_size_b) - const_length_b;
@@ -3269,30 +3300,34 @@ vect_check_lower_bound (loop_vec_info lo
   LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
 }
 
-/* Return true if it's unlikely that the step of the vectorized form of DR
+/* Return true if it's unlikely that the step of the vectorized form of DR_INFO
    will span fewer than GAP bytes.  */
 
 static bool
-vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
+vect_small_gap_p (loop_vec_info loop_vinfo, dr_vec_info *dr_info,
+		  poly_int64 gap)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
     count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
-  return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
+  return (estimated_poly_value (gap)
+	  <= count * vect_get_scalar_dr_size (dr_info));
 }
 
-/* Return true if we know that there is no alias between DR_A and DR_B
-   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
-   *LOWER_BOUND_OUT to this N.  */
+/* Return true if we know that there is no alias between DR_INFO_A and
+   DR_INFO_B when abs (DR_STEP (DR_INFO_A->dr)) >= N for some N.
+   When returning true, set *LOWER_BOUND_OUT to this N.  */
 
 static bool
-vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
+vectorizable_with_step_bound_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b,
 				poly_uint64 *lower_bound_out)
 {
   /* Check that there is a constant gap of known sign between DR_A
      and DR_B.  */
+  data_reference *dr_a = dr_info_a->dr;
+  data_reference *dr_b = dr_info_b->dr;
   poly_int64 init_a, init_b;
   if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
       || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
@@ -3306,19 +3341,19 @@ vectorizable_with_step_bound_p (data_ref
   if (maybe_lt (init_b, init_a))
     {
       std::swap (init_a, init_b);
+      std::swap (dr_info_a, dr_info_b);
       std::swap (dr_a, dr_b);
     }
 
   /* If the two accesses could be dependent within a scalar iteration,
      make sure that we'd retain their order.  */
-  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
-      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
-					 vect_dr_stmt (dr_b)))
+  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_info_a), init_b)
+      && !vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
     return false;
 
   /* There is no alias if abs (DR_STEP) is greater than or equal to
      the bytes spanned by the combination of the two accesses.  */
-  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_b) - init_a;
+  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_info_b) - init_a;
   return true;
 }
 
@@ -3376,7 +3411,6 @@ vect_prune_runtime_alias_test_list (loop
     {
       int comp_res;
       poly_uint64 lower_bound;
-      struct data_reference *dr_a, *dr_b;
       tree segment_length_a, segment_length_b;
       unsigned HOST_WIDE_INT access_size_a, access_size_b;
       unsigned int align_a, align_b;
@@ -3404,25 +3438,26 @@ vect_prune_runtime_alias_test_list (loop
 	  continue;
 	}
 
-      dr_a = DDR_A (ddr);
-      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
+      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
+      stmt_vec_info stmt_info_a = dr_info_a->stmt;
 
-      dr_b = DDR_B (ddr);
-      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
+      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
+      stmt_vec_info stmt_info_b = dr_info_b->stmt;
 
       /* Skip the pair if inter-iteration dependencies are irrelevant
 	 and intra-iteration dependencies are guaranteed to be honored.  */
       if (ignore_step_p
-	  && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
-	      || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
+	  && (vect_preserves_scalar_order_p (dr_info_a, dr_info_b)
+	      || vectorizable_with_step_bound_p (dr_info_a, dr_info_b,
+						 &lower_bound)))
 	{
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
 			       "no need for alias check between ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      dump_printf (MSG_NOTE, " when VF is 1\n");
 	    }
 	  continue;
@@ -3433,20 +3468,21 @@ vect_prune_runtime_alias_test_list (loop
 	 (It might not be, for example, if the minimum step is much larger
 	 than the number of bytes handled by one vector iteration.)  */
       if (!ignore_step_p
-	  && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
-	  && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
-	  && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
-	      || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
+	  && TREE_CODE (DR_STEP (dr_info_a->dr)) != INTEGER_CST
+	  && vectorizable_with_step_bound_p (dr_info_a, dr_info_b,
+					     &lower_bound)
+	  && (vect_small_gap_p (loop_vinfo, dr_info_a, lower_bound)
+	      || vect_small_gap_p (loop_vinfo, dr_info_b, lower_bound)))
 	{
-	  bool unsigned_p = dr_known_forward_stride_p (dr_a);
+	  bool unsigned_p = dr_known_forward_stride_p (dr_info_a->dr);
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location, "no alias between ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      dump_printf (MSG_NOTE, " when the step ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_STEP (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_STEP (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " is outside ");
 	      if (unsigned_p)
 		dump_printf (MSG_NOTE, "[0");
@@ -3459,8 +3495,8 @@ vect_prune_runtime_alias_test_list (loop
 	      dump_dec (MSG_NOTE, lower_bound);
 	      dump_printf (MSG_NOTE, ")\n");
 	    }
-	  vect_check_lower_bound (loop_vinfo, DR_STEP (dr_a), unsigned_p,
-				  lower_bound);
+	  vect_check_lower_bound (loop_vinfo, DR_STEP (dr_info_a->dr),
+				  unsigned_p, lower_bound);
 	  continue;
 	}
 
@@ -3468,14 +3504,14 @@ vect_prune_runtime_alias_test_list (loop
       if (dr_group_first_a)
 	{
 	  stmt_info_a = dr_group_first_a;
-	  dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
+	  dr_info_a = STMT_VINFO_DR_INFO (stmt_info_a);
 	}
 
       stmt_vec_info dr_group_first_b = DR_GROUP_FIRST_ELEMENT (stmt_info_b);
       if (dr_group_first_b)
 	{
 	  stmt_info_b = dr_group_first_b;
-	  dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
+	  dr_info_b = STMT_VINFO_DR_INFO (stmt_info_b);
 	}
 
       if (ignore_step_p)
@@ -3485,32 +3521,33 @@ vect_prune_runtime_alias_test_list (loop
 	}
       else
 	{
-	  if (!operand_equal_p (DR_STEP (dr_a), DR_STEP (dr_b), 0))
+	  if (!operand_equal_p (DR_STEP (dr_info_a->dr),
+				DR_STEP (dr_info_b->dr), 0))
 	    length_factor = scalar_loop_iters;
 	  else
 	    length_factor = size_int (vect_factor);
-	  segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
-	  segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
+	  segment_length_a = vect_vfa_segment_size (dr_info_a, length_factor);
+	  segment_length_b = vect_vfa_segment_size (dr_info_b, length_factor);
 	}
-      access_size_a = vect_vfa_access_size (dr_a);
-      access_size_b = vect_vfa_access_size (dr_b);
-      align_a = vect_vfa_align (dr_a);
-      align_b = vect_vfa_align (dr_b);
+      access_size_a = vect_vfa_access_size (dr_info_a);
+      access_size_b = vect_vfa_access_size (dr_info_b);
+      align_a = vect_vfa_align (dr_info_a);
+      align_b = vect_vfa_align (dr_info_b);
 
-      comp_res = data_ref_compare_tree (DR_BASE_ADDRESS (dr_a),
-					DR_BASE_ADDRESS (dr_b));
+      comp_res = data_ref_compare_tree (DR_BASE_ADDRESS (dr_info_a->dr),
+					DR_BASE_ADDRESS (dr_info_b->dr));
       if (comp_res == 0)
-	comp_res = data_ref_compare_tree (DR_OFFSET (dr_a),
-					  DR_OFFSET (dr_b));
+	comp_res = data_ref_compare_tree (DR_OFFSET (dr_info_a->dr),
+					  DR_OFFSET (dr_info_b->dr));
 
       /* See whether the alias is known at compilation time.  */
       if (comp_res == 0
-	  && TREE_CODE (DR_STEP (dr_a)) == INTEGER_CST
-	  && TREE_CODE (DR_STEP (dr_b)) == INTEGER_CST
+	  && TREE_CODE (DR_STEP (dr_info_a->dr)) == INTEGER_CST
+	  && TREE_CODE (DR_STEP (dr_info_b->dr)) == INTEGER_CST
 	  && poly_int_tree_p (segment_length_a)
 	  && poly_int_tree_p (segment_length_b))
 	{
-	  int res = vect_compile_time_alias (dr_a, dr_b,
+	  int res = vect_compile_time_alias (dr_info_a, dr_info_b,
 					     segment_length_a,
 					     segment_length_b,
 					     access_size_a,
@@ -3519,9 +3556,9 @@ vect_prune_runtime_alias_test_list (loop
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
 			       "can tell at compile time that ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      if (res == 0)
 		dump_printf (MSG_NOTE, " do not alias\n");
 	      else
@@ -3541,8 +3578,10 @@ vect_prune_runtime_alias_test_list (loop
 	}
 
       dr_with_seg_len_pair_t dr_with_seg_len_pair
-	(dr_with_seg_len (dr_a, segment_length_a, access_size_a, align_a),
-	 dr_with_seg_len (dr_b, segment_length_b, access_size_b, align_b));
+	(dr_with_seg_len (dr_info_a->dr, segment_length_a,
+			  access_size_a, align_a),
+	 dr_with_seg_len (dr_info_b->dr, segment_length_b,
+			  access_size_b, align_b));
 
       /* Canonicalize pairs by sorting the two DR members.  */
       if (comp_res > 0)
@@ -4451,18 +4490,18 @@ vect_get_new_ssa_name (tree type, enum v
   return new_vect_var;
 }
 
-/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
+/* Duplicate ptr info and set alignment/misaligment on NAME from DR_INFO.  */
 
 static void
-vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
+vect_duplicate_ssa_name_ptr_info (tree name, dr_vec_info *dr_info)
 {
-  duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
-  int misalign = DR_MISALIGNMENT (dr);
+  duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr_info->dr));
+  int misalign = DR_MISALIGNMENT (dr_info);
   if (misalign == DR_MISALIGNMENT_UNKNOWN)
     mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (name));
   else
     set_ptr_info_alignment (SSA_NAME_PTR_INFO (name),
-			    DR_TARGET_ALIGNMENT (dr), misalign);
+			    DR_TARGET_ALIGNMENT (dr_info), misalign);
 }
 
 /* Function vect_create_addr_base_for_vector_ref.
@@ -4505,7 +4544,8 @@ vect_create_addr_base_for_vector_ref (st
 				      tree offset,
 				      tree byte_offset)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   const char *base_name;
   tree addr_base;
   tree dest;
@@ -4513,7 +4553,7 @@ vect_create_addr_base_for_vector_ref (st
   tree vect_ptr_type;
   tree step = TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr)));
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
 
   tree data_ref_base = unshare_expr (drb->base_address);
   tree base_offset = unshare_expr (drb->offset);
@@ -4566,7 +4606,7 @@ vect_create_addr_base_for_vector_ref (st
       && TREE_CODE (addr_base) == SSA_NAME
       && !SSA_NAME_PTR_INFO (addr_base))
     {
-      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
+      vect_duplicate_ssa_name_ptr_info (addr_base, dr_info);
       if (offset || byte_offset)
 	mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
     }
@@ -4658,7 +4698,8 @@ vect_create_data_ref_ptr (stmt_vec_info
   edge pe = NULL;
   basic_block new_bb;
   tree aggr_ptr_init;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   tree aptr;
   gimple_stmt_iterator incr_gsi;
   bool insert_after;
@@ -4687,7 +4728,7 @@ vect_create_data_ref_ptr (stmt_vec_info
 
   /* Check the step (evolution) of the load in LOOP, and record
      whether it's invariant.  */
-  step = vect_dr_behavior (dr)->step;
+  step = vect_dr_behavior (dr_info)->step;
   if (integer_zerop (step))
     *inv_p = true;
   else
@@ -4832,8 +4873,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -4862,8 +4903,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -5406,7 +5447,8 @@ vect_setup_realignment (stmt_vec_info st
 {
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   struct loop *loop = NULL;
   edge pe = NULL;
   tree scalar_dest = gimple_assign_lhs (stmt_info->stmt);
@@ -5519,7 +5561,7 @@ vect_setup_realignment (stmt_vec_info st
 	new_temp = copy_ssa_name (ptr);
       else
 	new_temp = make_ssa_name (TREE_TYPE (ptr));
-      unsigned int align = DR_TARGET_ALIGNMENT (dr);
+      unsigned int align = DR_TARGET_ALIGNMENT (dr_info);
       new_stmt = gimple_build_assign
 		   (new_temp, BIT_AND_EXPR, ptr,
 		    build_int_cst (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
@@ -6421,24 +6463,25 @@ vect_can_force_dr_alignment_p (const_tre
 }
 
 
-/* Return whether the data reference DR is supported with respect to its
+/* Return whether the data reference DR_INFO is supported with respect to its
    alignment.
    If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
    it is aligned, i.e., check if it is possible to vectorize it with different
    alignment.  */
 
 enum dr_alignment_support
-vect_supportable_dr_alignment (struct data_reference *dr,
+vect_supportable_dr_alignment (dr_vec_info *dr_info,
                                bool check_aligned_accesses)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = dr_info->dr;
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   machine_mode mode = TYPE_MODE (vectype);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *vect_loop = NULL;
   bool nested_in_vect_loop = false;
 
-  if (aligned_access_p (dr) && !check_aligned_accesses)
+  if (aligned_access_p (dr_info) && !check_aligned_accesses)
     return dr_aligned;
 
   /* For now assume all conditional loads/stores support unaligned
@@ -6546,11 +6589,11 @@ vect_supportable_dr_alignment (struct da
 	  else
 	    return dr_explicit_realign_optimized;
 	}
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
       if (targetm.vectorize.support_vector_misalignment
-	    (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	    (mode, type, DR_MISALIGNMENT (dr_info), is_packed))
 	/* Can't software pipeline the loads, but can at least do them.  */
 	return dr_unaligned_supported;
     }
@@ -6559,11 +6602,11 @@ vect_supportable_dr_alignment (struct da
       bool is_packed = false;
       tree type = (TREE_TYPE (DR_REF (dr)));
 
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
      if (targetm.vectorize.support_vector_misalignment
-	   (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	   (mode, type, DR_MISALIGNMENT (dr_info), is_packed))
        return dr_unaligned_supported;
     }
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-26 11:28:07.929273995 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-26 11:42:19.031663762 +0100
@@ -1560,14 +1560,15 @@ vect_update_ivs_after_vectorizer (loop_v
 static tree
 get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
   gcc_assert (target_align != 0);
 
-  bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
+  bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					size_zero_node) < 0;
   tree offset = (negative
 		 ? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1)
 		 : size_zero_node);
@@ -1626,14 +1627,14 @@ get_misalign_in_elems (gimple **seq, loo
 vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
 			     basic_block bb, int *bound)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
+  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
   tree var;
   tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
   gimple_seq stmts = NULL, new_stmts = NULL;
   tree iters, iters_name;
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
 
   if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) > 0)
     {
@@ -1658,7 +1659,8 @@ vect_gen_prolog_loop_niters (loop_vec_in
 
       /* Create:  (niters_type) ((align_in_elems - misalign_in_elems)
 				 & (align_in_elems - 1)).  */
-      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
+      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					    size_zero_node) < 0;
       if (negative)
 	iters = fold_build2 (MINUS_EXPR, type, misalign_in_elems,
 			     align_in_elems_tree);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-26 11:28:12.000000000 +0100
+++ gcc/tree-vect-loop.c	2018-07-26 11:42:19.031663762 +0100
@@ -2142,8 +2142,9 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  /* Niters for peeled prolog loop.  */
 	  if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
 	    {
-	      struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-	      tree vectype = STMT_VINFO_VECTYPE (vect_dr_stmt (dr));
+	      dr_vec_info *dr_info
+		= DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+	      tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
 	      niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
 	    }
 	  else
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-26 11:30:56.197256524 +0100
+++ gcc/tree-vect-stmts.c	2018-07-26 11:42:19.035663718 +0100
@@ -1057,8 +1057,9 @@ vect_get_store_cost (stmt_vec_info stmt_
 		     unsigned int *inside_cost,
 		     stmt_vector_for_cost *body_cost_vec)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (dr_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1079,7 +1080,8 @@ vect_get_store_cost (stmt_vec_info stmt_
         /* Here, we assign an additional cost for the unaligned store.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_store, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  DR_MISALIGNMENT (dr_info),
+					  vect_body);
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
                            "vect_model_store_cost: unaligned supported by "
@@ -1236,8 +1238,9 @@ vect_get_load_cost (stmt_vec_info stmt_i
 		    stmt_vector_for_cost *body_cost_vec,
 		    bool record_prologue_costs)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (dr_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1257,7 +1260,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
         /* Here, we assign an additional cost for the unaligned load.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_load, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  DR_MISALIGNMENT (dr_info),
+					  vect_body);
 
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
@@ -1975,7 +1979,8 @@ vect_truncate_gather_scatter_offset (stm
 				     loop_vec_info loop_vinfo, bool masked_p,
 				     gather_scatter_info *gs_info)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   if (TREE_CODE (step) != INTEGER_CST)
     {
@@ -2003,7 +2008,7 @@ vect_truncate_gather_scatter_offset (stm
     count = max_iters.to_shwi ();
 
   /* Try scales of 1 and the element size.  */
-  int scales[] = { 1, vect_get_scalar_dr_size (dr) };
+  int scales[] = { 1, vect_get_scalar_dr_size (dr_info) };
   wi::overflow_type overflow = wi::OVF_NONE;
   for (int i = 0; i < 2; ++i)
     {
@@ -2102,8 +2107,8 @@ vect_use_strided_gather_scatters_p (stmt
 static int
 compare_step_with_zero (stmt_vec_info stmt_info)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  return tree_int_cst_compare (vect_dr_behavior (dr)->step,
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  return tree_int_cst_compare (vect_dr_behavior (dr_info)->step,
 			       size_zero_node);
 }
 
@@ -2166,7 +2171,7 @@ get_group_load_store_type (stmt_vec_info
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
   stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-  data_reference *first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
   unsigned int group_size = DR_GROUP_SIZE (first_stmt_info);
   bool single_element_p = (stmt_info == first_stmt_info
 			   && !DR_GROUP_NEXT_ELEMENT (stmt_info));
@@ -2218,8 +2223,8 @@ get_group_load_store_type (stmt_vec_info
 	     be a multiple of B and so we are guaranteed to access a
 	     non-gap element in the same B-sized block.  */
 	  if (overrun_p
-	      && gap < (vect_known_alignment_in_bytes (first_dr)
-			/ vect_get_scalar_dr_size (first_dr)))
+	      && gap < (vect_known_alignment_in_bytes (first_dr_info)
+			/ vect_get_scalar_dr_size (first_dr_info)))
 	    overrun_p = false;
 	  if (overrun_p && !can_overrun_p)
 	    {
@@ -2246,8 +2251,8 @@ get_group_load_store_type (stmt_vec_info
 	 same B-sized block.  */
       if (would_overrun_p
 	  && !masked_p
-	  && gap < (vect_known_alignment_in_bytes (first_dr)
-		    / vect_get_scalar_dr_size (first_dr)))
+	  && gap < (vect_known_alignment_in_bytes (first_dr_info)
+		    / vect_get_scalar_dr_size (first_dr_info)))
 	would_overrun_p = false;
 
       if (!STMT_VINFO_STRIDED_P (stmt_info)
@@ -2339,7 +2344,7 @@ get_negative_load_store_type (stmt_vec_i
 			      vec_load_store_type vls_type,
 			      unsigned int ncopies)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
   dr_alignment_support alignment_support_scheme;
 
   if (ncopies > 1)
@@ -2350,7 +2355,7 @@ get_negative_load_store_type (stmt_vec_i
       return VMAT_ELEMENTWISE;
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  alignment_support_scheme = vect_supportable_dr_alignment (dr_info, false);
   if (alignment_support_scheme != dr_aligned
       && alignment_support_scheme != dr_unaligned_supported)
     {
@@ -2923,19 +2928,19 @@ vect_get_strided_load_store_ops (stmt_ve
 }
 
 /* Return the amount that should be added to a vector pointer to move
-   to the next or previous copy of AGGR_TYPE.  DR is the data reference
+   to the next or previous copy of AGGR_TYPE.  DR_INFO is the data reference
    being vectorized and MEMORY_ACCESS_TYPE describes the type of
    vectorization.  */
 
 static tree
-vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
+vect_get_data_ptr_increment (dr_vec_info *dr_info, tree aggr_type,
 			     vect_memory_access_type memory_access_type)
 {
   if (memory_access_type == VMAT_INVARIANT)
     return size_zero_node;
 
   tree iv_step = TYPE_SIZE_UNIT (aggr_type);
-  tree step = vect_dr_behavior (dr)->step;
+  tree step = vect_dr_behavior (dr_info)->step;
   if (tree_int_cst_sgn (step) == -1)
     iv_step = fold_build1 (NEGATE_EXPR, TREE_TYPE (iv_step), iv_step);
   return iv_step;
@@ -6169,19 +6174,20 @@ vectorizable_operation (stmt_vec_info st
   return true;
 }
 
-/* A helper function to ensure data reference DR's base alignment.  */
+/* A helper function to ensure data reference DR_INFO's base alignment.  */
 
 static void
-ensure_base_align (struct data_reference *dr)
+ensure_base_align (dr_vec_info *dr_info)
 {
-  if (DR_VECT_AUX (dr)->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
+  if (dr_info->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
     return;
 
-  if (DR_VECT_AUX (dr)->base_misaligned)
+  if (dr_info->base_misaligned)
     {
-      tree base_decl = DR_VECT_AUX (dr)->base_decl;
+      tree base_decl = dr_info->base_decl;
 
-      unsigned int align_base_to = DR_TARGET_ALIGNMENT (dr) * BITS_PER_UNIT;
+      unsigned int align_base_to
+	= DR_TARGET_ALIGNMENT (dr_info) * BITS_PER_UNIT;
 
       if (decl_in_symtab_p (base_decl))
 	symtab_node::get (base_decl)->increase_alignment (align_base_to);
@@ -6190,7 +6196,7 @@ ensure_base_align (struct data_reference
 	  SET_DECL_ALIGN (base_decl, align_base_to);
           DECL_USER_ALIGN (base_decl) = 1;
 	}
-      DR_VECT_AUX (dr)->base_misaligned = false;
+      dr_info->base_misaligned = false;
     }
 }
 
@@ -6239,7 +6245,6 @@ vectorizable_store (stmt_vec_info stmt_i
   tree data_ref;
   tree op;
   tree vec_oprnd = NULL_TREE;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -6401,19 +6406,20 @@ vectorizable_store (stmt_vec_info stmt_i
 	return false;
     }
 
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info), *first_dr_info = NULL;
   grouped_store = (STMT_VINFO_GROUPED_ACCESS (stmt_info)
 		   && memory_access_type != VMAT_GATHER_SCATTER
 		   && (slp || memory_access_type != VMAT_CONTIGUOUS));
   if (grouped_store)
     {
       first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+      first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
       group_size = DR_GROUP_SIZE (first_stmt_info);
     }
   else
     {
       first_stmt_info = stmt_info;
-      first_dr = dr;
+      first_dr_info = dr_info;
       group_size = vec_num = 1;
     }
 
@@ -6435,7 +6441,7 @@ vectorizable_store (stmt_vec_info stmt_i
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (dr_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -6614,7 +6620,7 @@ vectorizable_store (stmt_vec_info stmt_i
 	  first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
 	  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_stmt_info)
 		      == first_stmt_info);
-	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+	  first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
 	  op = vect_get_store_rhs (first_stmt_info);
         } 
       else
@@ -6625,7 +6631,7 @@ vectorizable_store (stmt_vec_info stmt_i
       ref_type = get_group_alias_ptr_type (first_stmt_info);
     }
   else
-    ref_type = reference_alias_ptr_type (DR_REF (first_dr));
+    ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
 
   if (dump_enabled_p ())
     dump_printf_loc (MSG_NOTE, vect_location,
@@ -6651,11 +6657,11 @@ vectorizable_store (stmt_vec_info stmt_i
 
       stride_base
 	= fold_build_pointer_plus
-	    (DR_BASE_ADDRESS (first_dr),
+	    (DR_BASE_ADDRESS (first_dr_info->dr),
 	     size_binop (PLUS_EXPR,
-			 convert_to_ptrofftype (DR_OFFSET (first_dr)),
-			 convert_to_ptrofftype (DR_INIT (first_dr))));
-      stride_step = fold_convert (sizetype, DR_STEP (first_dr));
+			 convert_to_ptrofftype (DR_OFFSET (first_dr_info->dr)),
+			 convert_to_ptrofftype (DR_INIT (first_dr_info->dr))));
+      stride_step = fold_convert (sizetype, DR_STEP (first_dr_info->dr));
 
       /* For a store with loop-invariant (but other than power-of-2)
          stride (i.e. not a grouped access) like so:
@@ -6835,7 +6841,7 @@ vectorizable_store (stmt_vec_info stmt_i
 						 group_el * elsz);
 		  newref = build2 (MEM_REF, ltype,
 				   running_off, this_off);
-		  vect_copy_ref_info (newref, DR_REF (first_dr));
+		  vect_copy_ref_info (newref, DR_REF (first_dr_info->dr));
 
 		  /* And store it to *running_off.  */
 		  assign = gimple_build_assign (newref, elem);
@@ -6878,7 +6884,8 @@ vectorizable_store (stmt_vec_info stmt_i
   auto_vec<tree> dr_chain (group_size);
   oprnds.create (group_size);
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_dr_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -6916,7 +6923,8 @@ vectorizable_store (stmt_vec_info stmt_i
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (dr_info, aggr_type,
+					  memory_access_type);
     }
 
   if (mask)
@@ -7011,14 +7019,14 @@ vectorizable_store (stmt_vec_info stmt_i
 	  bool simd_lane_access_p
 	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info);
 	  if (simd_lane_access_p
-	      && TREE_CODE (DR_BASE_ADDRESS (first_dr)) == ADDR_EXPR
-	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr), 0))
-	      && integer_zerop (DR_OFFSET (first_dr))
-	      && integer_zerop (DR_INIT (first_dr))
+	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
+	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
+	      && integer_zerop (DR_OFFSET (first_dr_info->dr))
+	      && integer_zerop (DR_INIT (first_dr_info->dr))
 	      && alias_sets_conflict_p (get_alias_set (aggr_type),
 					get_alias_set (TREE_TYPE (ref_type))))
 	    {
-	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
+	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
@@ -7175,16 +7183,16 @@ vectorizable_store (stmt_vec_info stmt_i
 		   vect_permute_store_chain().  */
 		vec_oprnd = result_chain[i];
 
-	      align = DR_TARGET_ALIGNMENT (first_dr);
-	      if (aligned_access_p (first_dr))
+	      align = DR_TARGET_ALIGNMENT (first_dr_info);
+	      if (aligned_access_p (first_dr_info))
 		misalign = 0;
-	      else if (DR_MISALIGNMENT (first_dr) == -1)
+	      else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		{
-		  align = dr_alignment (vect_dr_behavior (first_dr));
+		  align = dr_alignment (vect_dr_behavior (first_dr_info));
 		  misalign = 0;
 		}
 	      else
-		misalign = DR_MISALIGNMENT (first_dr);
+		misalign = DR_MISALIGNMENT (first_dr_info);
 	      if (dataref_offset == NULL_TREE
 		  && TREE_CODE (dataref_ptr) == SSA_NAME)
 		set_ptr_info_alignment (get_ptr_info (dataref_ptr), align,
@@ -7227,9 +7235,9 @@ vectorizable_store (stmt_vec_info stmt_i
 					  dataref_offset
 					  ? dataref_offset
 					  : build_int_cst (ref_type, 0));
-		  if (aligned_access_p (first_dr))
+		  if (aligned_access_p (first_dr_info))
 		    ;
-		  else if (DR_MISALIGNMENT (first_dr) == -1)
+		  else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		    TREE_TYPE (data_ref)
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    align * BITS_PER_UNIT);
@@ -7237,7 +7245,7 @@ vectorizable_store (stmt_vec_info stmt_i
 		    TREE_TYPE (data_ref)
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    TYPE_ALIGN (elem_type));
-		  vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		  vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		  gassign *new_stmt
 		    = gimple_build_assign (data_ref, vec_oprnd);
 		  new_stmt_info
@@ -7400,7 +7408,6 @@ vectorizable_load (stmt_vec_info stmt_in
   struct loop *loop = NULL;
   struct loop *containing_loop = gimple_bb (stmt_info->stmt)->loop_father;
   bool nested_in_vect_loop = false;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
   tree new_temp;
   machine_mode mode;
@@ -7663,7 +7670,8 @@ vectorizable_load (stmt_vec_info stmt_in
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info), *first_dr_info = NULL;
+  ensure_base_align (dr_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -7692,12 +7700,12 @@ vectorizable_load (stmt_vec_info stmt_in
       if (grouped_load)
 	{
 	  first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+	  first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
 	}
       else
 	{
 	  first_stmt_info = stmt_info;
-	  first_dr = dr;
+	  first_dr_info = dr_info;
 	}
       if (slp && grouped_load)
 	{
@@ -7712,16 +7720,16 @@ vectorizable_load (stmt_vec_info stmt_in
 		 * vect_get_place_in_interleaving_chain (stmt_info,
 							 first_stmt_info));
 	  group_size = 1;
-	  ref_type = reference_alias_ptr_type (DR_REF (dr));
+	  ref_type = reference_alias_ptr_type (DR_REF (dr_info->dr));
 	}
 
       stride_base
 	= fold_build_pointer_plus
-	    (DR_BASE_ADDRESS (first_dr),
+	    (DR_BASE_ADDRESS (first_dr_info->dr),
 	     size_binop (PLUS_EXPR,
-			 convert_to_ptrofftype (DR_OFFSET (first_dr)),
-			 convert_to_ptrofftype (DR_INIT (first_dr))));
-      stride_step = fold_convert (sizetype, DR_STEP (first_dr));
+			 convert_to_ptrofftype (DR_OFFSET (first_dr_info->dr)),
+			 convert_to_ptrofftype (DR_INIT (first_dr_info->dr))));
+      stride_step = fold_convert (sizetype, DR_STEP (first_dr_info->dr));
 
       /* For a load with loop-invariant (but other than power-of-2)
          stride (i.e. not a grouped access) like so:
@@ -7850,7 +7858,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	      tree this_off = build_int_cst (TREE_TYPE (alias_off),
 					     group_el * elsz + cst_offset);
 	      tree data_ref = build2 (MEM_REF, ltype, running_off, this_off);
-	      vect_copy_ref_info (data_ref, DR_REF (first_dr));
+	      vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 	      gassign *new_stmt
 		= gimple_build_assign (make_ssa_name (ltype), data_ref);
 	      new_stmt_info
@@ -7946,7 +7954,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	  *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
 	  return true;
 	}
-      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+      first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
       group_gap_adj = 0;
 
       /* VEC_NUM is the number of vect stmts to be created for this group.  */
@@ -7980,13 +7988,14 @@ vectorizable_load (stmt_vec_info stmt_in
   else
     {
       first_stmt_info = stmt_info;
-      first_dr = dr;
+      first_dr_info = dr_info;
       group_size = vec_num = 1;
       group_gap_adj = 0;
-      ref_type = reference_alias_ptr_type (DR_REF (first_dr));
+      ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_dr_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -8105,7 +8114,7 @@ vectorizable_load (stmt_vec_info stmt_in
      nested within an outer-loop that is being vectorized.  */
 
   if (nested_in_vect_loop
-      && !multiple_p (DR_STEP_ALIGNMENT (dr),
+      && !multiple_p (DR_STEP_ALIGNMENT (dr_info->dr),
 		      GET_MODE_SIZE (TYPE_MODE (vectype))))
     {
       gcc_assert (alignment_support_scheme != dr_explicit_realign_optimized);
@@ -8151,7 +8160,8 @@ vectorizable_load (stmt_vec_info stmt_in
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (dr_info, aggr_type,
+					  memory_access_type);
     }
 
   tree vec_mask = NULL_TREE;
@@ -8166,16 +8176,16 @@ vectorizable_load (stmt_vec_info stmt_in
 	  bool simd_lane_access_p
 	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info);
 	  if (simd_lane_access_p
-	      && TREE_CODE (DR_BASE_ADDRESS (first_dr)) == ADDR_EXPR
-	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr), 0))
-	      && integer_zerop (DR_OFFSET (first_dr))
-	      && integer_zerop (DR_INIT (first_dr))
+	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
+	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
+	      && integer_zerop (DR_OFFSET (first_dr_info->dr))
+	      && integer_zerop (DR_INIT (first_dr_info->dr))
 	      && alias_sets_conflict_p (get_alias_set (aggr_type),
 					get_alias_set (TREE_TYPE (ref_type)))
 	      && (alignment_support_scheme == dr_aligned
 		  || alignment_support_scheme == dr_unaligned_supported))
 	    {
-	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
+	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
@@ -8190,10 +8200,11 @@ vectorizable_load (stmt_vec_info stmt_in
 	      /* Adjust the pointer by the difference to first_stmt.  */
 	      data_reference_p ptrdr
 		= STMT_VINFO_DATA_REF (first_stmt_info_for_drptr);
-	      tree diff = fold_convert (sizetype,
-					size_binop (MINUS_EXPR,
-						    DR_INIT (first_dr),
-						    DR_INIT (ptrdr)));
+	      tree diff
+		= fold_convert (sizetype,
+				size_binop (MINUS_EXPR,
+					    DR_INIT (first_dr_info->dr),
+					    DR_INIT (ptrdr)));
 	      dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
 					     stmt_info, diff);
 	    }
@@ -8326,19 +8337,20 @@ vectorizable_load (stmt_vec_info stmt_in
 			break;
 		      }
 
-		    align = DR_TARGET_ALIGNMENT (dr);
+		    align = DR_TARGET_ALIGNMENT (dr_info);
 		    if (alignment_support_scheme == dr_aligned)
 		      {
-			gcc_assert (aligned_access_p (first_dr));
+			gcc_assert (aligned_access_p (first_dr_info));
 			misalign = 0;
 		      }
-		    else if (DR_MISALIGNMENT (first_dr) == -1)
+		    else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		      {
-			align = dr_alignment (vect_dr_behavior (first_dr));
+			align = dr_alignment
+			  (vect_dr_behavior (first_dr_info));
 			misalign = 0;
 		      }
 		    else
-		      misalign = DR_MISALIGNMENT (first_dr);
+		      misalign = DR_MISALIGNMENT (first_dr_info);
 		    if (dataref_offset == NULL_TREE
 			&& TREE_CODE (dataref_ptr) == SSA_NAME)
 		      set_ptr_info_alignment (get_ptr_info (dataref_ptr),
@@ -8365,7 +8377,7 @@ vectorizable_load (stmt_vec_info stmt_in
 					 : build_int_cst (ref_type, 0));
 			if (alignment_support_scheme == dr_aligned)
 			  ;
-			else if (DR_MISALIGNMENT (first_dr) == -1)
+			else if (DR_MISALIGNMENT (first_dr_info) == -1)
 			  TREE_TYPE (data_ref)
 			    = build_aligned_type (TREE_TYPE (data_ref),
 						  align * BITS_PER_UNIT);
@@ -8392,7 +8404,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		      ptr = copy_ssa_name (dataref_ptr);
 		    else
 		      ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr_info);
 		    new_stmt = gimple_build_assign
 				 (ptr, BIT_AND_EXPR, dataref_ptr,
 				  build_int_cst
@@ -8402,7 +8414,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		    data_ref
 		      = build2 (MEM_REF, vectype, ptr,
 				build_int_cst (ref_type, 0));
-		    vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		    vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		    vec_dest = vect_create_destination_var (scalar_dest,
 							    vectype);
 		    new_stmt = gimple_build_assign (vec_dest, data_ref);
@@ -8436,7 +8448,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		      new_temp = copy_ssa_name (dataref_ptr);
 		    else
 		      new_temp = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr_info);
 		    new_stmt = gimple_build_assign
 		      (new_temp, BIT_AND_EXPR, dataref_ptr,
 		       build_int_cst (TREE_TYPE (dataref_ptr),
@@ -8454,7 +8466,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	      /* DATA_REF is null if we've already built the statement.  */
 	      if (data_ref)
 		{
-		  vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		  vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		  new_stmt = gimple_build_assign (vec_dest, data_ref);
 		}
 	      new_temp = make_ssa_name (vec_dest, new_stmt);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [39/46 v2] Change STMT_VINFO_UNALIGNED_DR to a dr_vec_info
  2018-07-24 10:08 ` [39/46] Replace STMT_VINFO_UNALIGNED_DR with the associated statement Richard Sandiford
@ 2018-07-26 11:08   ` Richard Sandiford
  2018-07-26 11:13     ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-26 11:08 UTC (permalink / raw)
  To: gcc-patches

[Updated after new 37/46 and 38/46]

After previous changes, it makes more sense for STMT_VINFO_UNALIGNED_DR
to be a dr_vec_info rather than a data_reference.
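
As a caller-side sketch of the effect (the real hunks are below),
users of LOOP_VINFO_UNALIGNED_DR no longer need to map the field back
to its vectorization info:

  /* Before: the field was a data_reference, so callers had to go
     through DR_VECT_AUX.  */
  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));

  /* After: the field already is the dr_vec_info.  */
  dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);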


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_loop_vec_info::unaligned_dr): Change to
	dr_vec_info.
	* tree-vect-data-refs.c (vect_enhance_data_refs_alignment): Update
	accordingly.
	* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
	* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
	(vect_gen_prolog_loop_niters): Likewise.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:42:19.035663718 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:42:24.919598492 +0100
@@ -437,7 +437,7 @@ typedef struct _loop_vec_info : public v
   tree mask_compare_type;
 
   /* Unknown DRs according to which loop was peeled.  */
-  struct data_reference *unaligned_dr;
+  struct dr_vec_info *unaligned_dr;
 
   /* peeling_for_alignment indicates whether peeling for alignment will take
      place, and what the peeling factor should be:
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-26 11:42:19.031663762 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-26 11:42:24.915598537 +0100
@@ -2135,7 +2135,7 @@ vect_enhance_data_refs_alignment (loop_v
 		vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
 	      }
 
-          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
+          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info;
           if (npeel)
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
           else
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-26 11:42:19.031663762 +0100
+++ gcc/tree-vect-loop.c	2018-07-26 11:42:24.919598492 +0100
@@ -2142,8 +2142,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  /* Niters for peeled prolog loop.  */
 	  if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
 	    {
-	      dr_vec_info *dr_info
-		= DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+	      dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
 	      tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
 	      niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
 	    }
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-26 11:42:19.031663762 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-26 11:42:24.915598537 +0100
@@ -1560,7 +1560,7 @@ vect_update_ivs_after_vectorizer (loop_v
 static tree
 get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
 {
-  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+  dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
   stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
@@ -1627,7 +1627,7 @@ get_misalign_in_elems (gimple **seq, loo
 vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
 			     basic_block bb, int *bound)
 {
-  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+  dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
   tree var;
   tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
   gimple_seq stmts = NULL, new_stmts = NULL;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [40/46 v2] Add vec_info::lookup_dr
  2018-07-24 10:09 ` [40/46] Add vec_info::lookup_dr Richard Sandiford
@ 2018-07-26 11:10   ` Richard Sandiford
  2018-07-26 11:16     ` Richard Biener
  0 siblings, 1 reply; 108+ messages in thread
From: Richard Sandiford @ 2018-07-26 11:10 UTC (permalink / raw)
  To: gcc-patches

[Updated after new 37/46 and 38/46.  41 onwards are unaffected.]

This patch replaces DR_VECT_AUX and vect_dr_stmt with a new
vec_info::lookup_dr function, so that the lookup is relative
to a particular vec_info rather than to global state.
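
As a sketch of the caller-side change (mirroring the hunks below):

  /* Before: a lookup through the global DR_VECT_AUX macro.  */
  dr_vec_info *dr_info = DR_VECT_AUX (dr);

  /* After: the lookup is a member function of the owning vec_info,
     so it only consults state belonging to that vec_info.  */
  dr_vec_info *dr_info = vinfo->lookup_dr (dr);
  stmt_vec_info stmt_info = dr_info->stmt;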


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vec_info::lookup_dr): New member function.
	(vect_dr_stmt): Delete.
	* tree-vectorizer.c (vec_info::lookup_dr): New function.
	* tree-vect-loop-manip.c (vect_update_inits_of_drs): Use it instead
	of DR_VECT_AUX.
	* tree-vect-data-refs.c (vect_analyze_possibly_independent_ddr)
	(vect_analyze_data_ref_dependence, vect_record_base_alignments)
	(vect_verify_datarefs_alignment, vect_peeling_supportable)
	(vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
	(vect_analyze_data_refs): Likewise.
	(vect_slp_analyze_data_ref_dependence): Likewise.  Take a vec_info
	argument.
	(vect_find_same_alignment_drs): Likewise.
	(vect_slp_analyze_node_dependences): Update calls accordingly.
	(vect_analyze_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
	instead of DR_VECT_AUX.
	(vect_get_peeling_costs_all_drs): Take a loop_vec_info instead
	of a vector data references.  Use vec_info::lookup_dr instead of
	DR_VECT_AUX.
	(vect_peeling_hash_get_lowest_cost): Update calls accordingly.
	(vect_enhance_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
	instead of DR_VECT_AUX.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:42:24.919598492 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:42:29.387548800 +0100
@@ -240,6 +240,7 @@ struct vec_info {
   stmt_vec_info lookup_stmt (gimple *);
   stmt_vec_info lookup_def (tree);
   stmt_vec_info lookup_single_use (tree);
+  struct dr_vec_info *lookup_dr (data_reference *);
   void move_dr (stmt_vec_info, stmt_vec_info);
 
   /* The type of vectorization.  */
@@ -1062,8 +1063,6 @@ #define HYBRID_SLP_STMT(S)
 #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
 #define STMT_SLP_TYPE(S)                   (S)->slp_type
 
-#define DR_VECT_AUX(dr) (STMT_VINFO_DR_INFO (vect_dr_stmt (dr)))
-
 #define VECT_MAX_COST 1000
 
 /* The maximum number of intermediate steps required in multi-step type
@@ -1273,20 +1272,6 @@ add_stmt_costs (void *data, stmt_vector_
 		   cost->misalign, cost->where);
 }
 
-/* Return the stmt DR is in.  For DR_STMT that have been replaced by
-   a pattern this returns the corresponding pattern stmt.  Otherwise
-   DR_STMT is returned.  */
-
-inline stmt_vec_info
-vect_dr_stmt (data_reference *dr)
-{
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
-  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
-  return stmt_info->dr_aux.stmt;
-}
-
 /*-----------------------------------------------------------------*/
 /* Info on data references alignment.                              */
 /*-----------------------------------------------------------------*/
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c	2018-07-26 11:30:56.197256524 +0100
+++ gcc/tree-vectorizer.c	2018-07-26 11:42:29.387548800 +0100
@@ -562,6 +562,17 @@ vec_info::lookup_single_use (tree lhs)
   return NULL;
 }
 
+/* Return vectorization information about DR.  */
+
+dr_vec_info *
+vec_info::lookup_dr (data_reference *dr)
+{
+  stmt_vec_info stmt_info = lookup_stmt (DR_STMT (dr));
+  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
+  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
+  return STMT_VINFO_DR_INFO (stmt_info->dr_aux.stmt);
+}
+
 /* Record that NEW_STMT_INFO now implements the same data reference
    as OLD_STMT_INFO.  */
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-26 11:42:24.915598537 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-26 11:42:29.387548800 +0100
@@ -1754,8 +1754,8 @@ vect_update_inits_of_drs (loop_vec_info
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      gimple *stmt = DR_STMT (dr);
-      if (!STMT_VINFO_GATHER_SCATTER_P (vinfo_for_stmt (stmt)))
+      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
+      if (!STMT_VINFO_GATHER_SCATTER_P (dr_info->stmt))
 	vect_update_init_of_dr (dr, niters, code);
     }
 }
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-26 11:42:24.915598537 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-26 11:42:29.387548800 +0100
@@ -269,10 +269,10 @@ vect_analyze_possibly_independent_ddr (d
 
 	     Note that the alias checks will be removed if the VF ends up
 	     being small enough.  */
-	  return (!STMT_VINFO_GATHER_SCATTER_P
-		     (vinfo_for_stmt (DR_STMT (DDR_A (ddr))))
-		  && !STMT_VINFO_GATHER_SCATTER_P
-		        (vinfo_for_stmt (DR_STMT (DDR_B (ddr))))
+	  dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
+	  dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
+	  return (!STMT_VINFO_GATHER_SCATTER_P (dr_info_a->stmt)
+		  && !STMT_VINFO_GATHER_SCATTER_P (dr_info_b->stmt)
 		  && vect_mark_for_runtime_alias_test (ddr, loop_vinfo));
 	}
     }
@@ -296,8 +296,8 @@ vect_analyze_data_ref_dependence (struct
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
-  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (dra);
+  dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (drb);
   stmt_vec_info stmtinfo_a = dr_info_a->stmt;
   stmt_vec_info stmtinfo_b = dr_info_b->stmt;
   lambda_vector dist_v;
@@ -604,17 +604,18 @@ vect_analyze_data_ref_dependences (loop_
 /* Function vect_slp_analyze_data_ref_dependence.
 
    Return TRUE if there (might) exist a dependence between a memory-reference
-   DRA and a memory-reference DRB.  When versioning for alias may check a
-   dependence at run-time, return FALSE.  Adjust *MAX_VF according to
-   the data dependence.  */
+   DRA and a memory-reference DRB for VINFO.  When versioning for alias
+   may check a dependence at run-time, return FALSE.  Adjust *MAX_VF
+   according to the data dependence.  */
 
 static bool
-vect_slp_analyze_data_ref_dependence (struct data_dependence_relation *ddr)
+vect_slp_analyze_data_ref_dependence (vec_info *vinfo,
+				      struct data_dependence_relation *ddr)
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
-  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
+  dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
 
   /* We need to check dependences of statements marked as unvectorizable
      as well, they still can prohibit vectorization.  */
@@ -726,7 +727,8 @@ vect_slp_analyze_node_dependences (slp_i
 		  data_reference *store_dr = STMT_VINFO_DATA_REF (store_info);
 		  ddr_p ddr = initialize_data_dependence_relation
 				(dr_a, store_dr, vNULL);
-		  dependent = vect_slp_analyze_data_ref_dependence (ddr);
+		  dependent
+		    = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
 		  free_dependence_relation (ddr);
 		  if (dependent)
 		    break;
@@ -736,7 +738,7 @@ vect_slp_analyze_node_dependences (slp_i
 	    {
 	      ddr_p ddr = initialize_data_dependence_relation (dr_a,
 							       dr_b, vNULL);
-	      dependent = vect_slp_analyze_data_ref_dependence (ddr);
+	      dependent = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
 	      free_dependence_relation (ddr);
 	    }
 	  if (dependent)
@@ -848,7 +850,7 @@ vect_record_base_alignments (vec_info *v
   unsigned int i;
   FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
       stmt_vec_info stmt_info = dr_info->stmt;
       if (!DR_IS_CONDITIONAL_IN_STMT (dr)
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
@@ -1172,7 +1174,7 @@ vect_verify_datarefs_alignment (loop_vec
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
       stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
@@ -1397,12 +1399,12 @@ vect_peeling_hash_get_most_frequent (_ve
   return 1;
 }
 
-/* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
-   misalignment will be zero after peeling.  */
+/* Get the costs of peeling NPEEL iterations for LOOP_VINFO, checking
+   data access costs for all data refs.  If UNKNOWN_MISALIGNMENT is true,
+   we assume DR0_INFO's misalignment will be zero after peeling.  */
 
 static void
-vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
+vect_get_peeling_costs_all_drs (loop_vec_info loop_vinfo,
 				dr_vec_info *dr0_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
@@ -1411,12 +1413,13 @@ vect_get_peeling_costs_all_drs (vec<data
 				unsigned int npeel,
 				bool unknown_misalignment)
 {
+  vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   unsigned i;
   data_reference *dr;
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
       stmt_vec_info stmt_info = dr_info->stmt;
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1466,10 +1469,9 @@ vect_peeling_hash_get_lowest_cost (_vect
   body_cost_vec.create (2);
   epilogue_cost_vec.create (2);
 
-  vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->dr_info, &inside_cost, &outside_cost,
-				  &body_cost_vec, &prologue_cost_vec,
-				  elem->npeel, false);
+  vect_get_peeling_costs_all_drs (loop_vinfo, elem->dr_info, &inside_cost,
+				  &outside_cost, &body_cost_vec,
+				  &prologue_cost_vec, elem->npeel, false);
 
   body_cost_vec.release ();
 
@@ -1550,7 +1552,7 @@ vect_peeling_supportable (loop_vec_info
       if (dr == dr0_info->dr)
 	continue;
 
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
       stmt_vec_info stmt_info = dr_info->stmt;
       /* For interleaving, only the alignment of the first access
 	 matters.  */
@@ -1732,7 +1734,7 @@ vect_enhance_data_refs_alignment (loop_v
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
       stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
@@ -1896,7 +1898,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
+      vect_get_peeling_costs_all_drs (loop_vinfo, dr0_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
@@ -1905,7 +1907,7 @@ vect_enhance_data_refs_alignment (loop_v
       if (first_store)
 	{
 	  dummy.create (2);
-	  vect_get_peeling_costs_all_drs (datarefs, first_store,
+	  vect_get_peeling_costs_all_drs (loop_vinfo, first_store,
 					  &store_inside_cost,
 					  &store_outside_cost,
 					  &dummy, &dummy,
@@ -1996,7 +1998,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, NULL, &nopeel_inside_cost,
+      vect_get_peeling_costs_all_drs (loop_vinfo, NULL, &nopeel_inside_cost,
 				      &nopeel_outside_cost, &dummy, &dummy,
 				      0, false);
       dummy.release ();
@@ -2126,7 +2128,7 @@ vect_enhance_data_refs_alignment (loop_v
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		dr_vec_info *dr_info = DR_VECT_AUX (dr);
+		dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
 		stmt_info = dr_info->stmt;
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -2176,7 +2178,7 @@ vect_enhance_data_refs_alignment (loop_v
     {
       FOR_EACH_VEC_ELT (datarefs, i, dr)
         {
-	  dr_vec_info *dr_info = DR_VECT_AUX (dr);
+	  dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
 	  stmt_vec_info stmt_info = dr_info->stmt;
 
 	  /* For interleaving, only the alignment of the first access
@@ -2291,16 +2293,16 @@ vect_enhance_data_refs_alignment (loop_v
 
 /* Function vect_find_same_alignment_drs.
 
-   Update group and alignment relations according to the chosen
+   Update group and alignment relations in VINFO according to the chosen
    vectorization factor.  */
 
 static void
-vect_find_same_alignment_drs (struct data_dependence_relation *ddr)
+vect_find_same_alignment_drs (vec_info *vinfo, data_dependence_relation *ddr)
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
-  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
+  dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
   stmt_vec_info stmtinfo_a = dr_info_a->stmt;
   stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 
@@ -2367,7 +2369,7 @@ vect_analyze_data_refs_alignment (loop_v
   unsigned int i;
 
   FOR_EACH_VEC_ELT (ddrs, i, ddr)
-    vect_find_same_alignment_drs (ddr);
+    vect_find_same_alignment_drs (vinfo, ddr);
 
   vec<data_reference_p> datarefs = vinfo->shared->datarefs;
   struct data_reference *dr;
@@ -2375,7 +2377,7 @@ vect_analyze_data_refs_alignment (loop_v
   vect_record_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
       if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
 	vect_compute_data_ref_alignment (dr_info);
     }
@@ -2941,7 +2943,7 @@ vect_analyze_data_ref_accesses (vec_info
   for (i = 0; i < datarefs_copy.length () - 1;)
     {
       data_reference_p dra = datarefs_copy[i];
-      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+      dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
       stmt_vec_info stmtinfo_a = dr_info_a->stmt;
       stmt_vec_info lastinfo = NULL;
       if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
@@ -2953,7 +2955,7 @@ vect_analyze_data_ref_accesses (vec_info
       for (i = i + 1; i < datarefs_copy.length (); ++i)
 	{
 	  data_reference_p drb = datarefs_copy[i];
-	  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+	  dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
 	  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 	  if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
 	      || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
@@ -3078,7 +3080,7 @@ vect_analyze_data_ref_accesses (vec_info
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
     {
-      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
       if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
 	  && !vect_analyze_data_ref_access (dr_info))
 	{
@@ -3438,10 +3440,10 @@ vect_prune_runtime_alias_test_list (loop
 	  continue;
 	}
 
-      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
+      dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
       stmt_vec_info stmt_info_a = dr_info_a->stmt;
 
-      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
+      dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
       stmt_vec_info stmt_info_b = dr_info_b->stmt;
 
       /* Skip the pair if inter-iteration dependencies are irrelevant

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [38/46] Pass stmt_vec_infos instead of data_references where relevant
  2018-07-26 11:05       ` Richard Sandiford
@ 2018-07-26 11:13         ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-26 11:13 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Thu, Jul 26, 2018 at 1:05 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Sandiford <richard.sandiford@arm.com> writes:
> > Richard Biener <richard.guenther@gmail.com> writes:
> >> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> >> <richard.sandiford@arm.com> wrote:
> >>>
> >>> This patch makes various routines (mostly in tree-vect-data-refs.c)
> >>> take stmt_vec_infos rather than data_references.  The affected routines
> >>> are really dealing with the way that an access is going to be vectorised
> >>> for a particular stmt_vec_info, rather than with the original scalar
> >>> access described by the data_reference.
> >>
> >> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
> >> the functions?
> >
> > Not sure.  If we...
> >
> >> We currently cannot handle aggregate copies in the to-be-vectorized IL
> >> but rely on SRA and friends to elide those.  That's the only two-DR
> >> stmt I can think of for vectorization.  Maybe aggregate by-value / return
> >> function calls with OMP SIMD if that supports this somehow.
> >
> > ...did this then I don't think a data_reference would be the natural
> > way of identifying a DR within a stmt_vec_info.  Presumably the
> > stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
> > If both of those were vectors then a (stmt_vec_info, index) pair
> > might make more sense than (stmt_vec_info, data_reference).
> >
> > Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
> > so that there's a back-pointer to the DR, add a stmt_vec_info
> > field to dataref_aux too, and then use dataref_aux instead of
> > stmt_vec_info as the key.
>
> New patch 37/46 does that.  The one below goes through and uses
> dr_vec_info instead of data_reference in code that is dealing
> with the way that a reference is going to be vectorised.
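>
> For reference, a minimal sketch of the dr_vec_info layout this gives
> us, reconstructed from the accessors used in the patch below (exact
> field order and types may differ):
>
>   /* Vectorizer-specific information attached to a data_reference.  */
>   struct dr_vec_info {
>     data_reference *dr;            /* back-pointer to the scalar DR */
>     stmt_vec_info stmt;            /* statement that contains the DR */
>     int misalignment;              /* -1 unknown, -2 uninitialized */
>     unsigned int target_alignment; /* DR_TARGET_ALIGNMENT, in bytes */
>     bool base_misaligned;          /* base_decl's alignment must be raised */
>     tree base_decl;                /* base to realign if base_misaligned */
>   };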

OK.

> Thanks,
> Richard
>
>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (set_dr_misalignment, dr_misalignment)
>         (DR_TARGET_ALIGNMENT, aligned_access_p, known_alignment_for_access_p)
>         (vect_known_alignment_in_bytes, vect_dr_behavior)
>         (vect_get_scalar_dr_size): Take references as dr_vec_infos
>         instead of data_references.  Update calls to other routines for
>         which the same change has been made.
>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Take
>         dr_vec_infos instead of stmt_vec_infos.
>         (vect_analyze_data_ref_dependence): Update call accordingly.
>         (vect_slp_analyze_data_ref_dependence)
>         (vect_record_base_alignments): Use DR_VECT_AUX.
>         (vect_calculate_target_alignment, vect_compute_data_ref_alignment)
>         (vect_update_misalignment_for_peel, verify_data_ref_alignment)
>         (vector_alignment_reachable_p, vect_get_data_access_cost)
>         (vect_peeling_supportable, vect_analyze_group_access_1)
>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>         (vect_vfa_segment_size, vect_vfa_access_size, vect_vfa_align)
>         (vect_compile_time_alias, vect_small_gap_p)
>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info):
>         (vect_supportable_dr_alignment): Take references as dr_vec_infos
>         instead of data_references.  Update calls to other routines for
>         which the same change has been made.
>         (vect_verify_datarefs_alignment, vect_get_peeling_costs_all_drs)
>         (vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
>         (vect_slp_analyze_and_verify_node_alignment)
>         (vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
>         (vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
>         (vect_setup_realignment): Use dr_vec_infos.  Update calls after
>         above changes.
>         (_vect_peel_info::dr): Replace with...
>         (_vect_peel_info::dr_info): ...this new field.
>         (vect_peeling_hash_get_most_frequent)
>         (vect_peeling_hash_choose_best_peeling): Update accordingly.
>         (vect_peeling_hash_get_lowest_cost):
>         (vect_enhance_data_refs_alignment): Likewise.  Update calls to other
>         routines for which the same change has been made.
>         (vect_peeling_hash_insert): Likewise.  Take a dr_vec_info instead of a
>         data_reference.
>         * tree-vect-loop-manip.c (get_misalign_in_elems)
>         (vect_gen_prolog_loop_niters): Use dr_vec_infos.  Update calls after
>         above changes.
>         * tree-vect-loop.c (vect_analyze_loop_2): Likewise.
>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>         (vect_truncate_gather_scatter_offset, compare_step_with_zero)
>         (get_group_load_store_type, get_negative_load_store_type)
>         (vect_get_data_ptr_increment, vectorizable_store)
>         (vectorizable_load): Likewise.
>         (ensure_base_align): Take a dr_vec_info instead of a data_reference.
>         Update calls to other routines for which the same change has been made.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:30:56.197256524 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:42:19.035663718 +0100
> @@ -1294,15 +1294,15 @@ #define DR_MISALIGNMENT_UNKNOWN (-1)
>  #define DR_MISALIGNMENT_UNINITIALIZED (-2)
>
>  inline void
> -set_dr_misalignment (struct data_reference *dr, int val)
> +set_dr_misalignment (dr_vec_info *dr_info, int val)
>  {
> -  DR_VECT_AUX (dr)->misalignment = val;
> +  dr_info->misalignment = val;
>  }
>
>  inline int
> -dr_misalignment (struct data_reference *dr)
> +dr_misalignment (dr_vec_info *dr_info)
>  {
> -  int misalign = DR_VECT_AUX (dr)->misalignment;
> +  int misalign = dr_info->misalignment;
>    gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
>    return misalign;
>  }
> @@ -1313,52 +1313,51 @@ #define DR_MISALIGNMENT(DR) dr_misalignm
>  #define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
>
>  /* Only defined once DR_MISALIGNMENT is defined.  */
> -#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
> +#define DR_TARGET_ALIGNMENT(DR) ((DR)->target_alignment)
>
> -/* Return true if data access DR is aligned to its target alignment
> +/* Return true if data access DR_INFO is aligned to its target alignment
>     (which may be less than a full vector).  */
>
>  static inline bool
> -aligned_access_p (struct data_reference *data_ref_info)
> +aligned_access_p (dr_vec_info *dr_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) == 0);
> +  return (DR_MISALIGNMENT (dr_info) == 0);
>  }
>
>  /* Return TRUE if the alignment of the data access is known, and FALSE
>     otherwise.  */
>
>  static inline bool
> -known_alignment_for_access_p (struct data_reference *data_ref_info)
> +known_alignment_for_access_p (dr_vec_info *dr_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
> +  return (DR_MISALIGNMENT (dr_info) != DR_MISALIGNMENT_UNKNOWN);
>  }
>
>  /* Return the minimum alignment in bytes that the vectorized version
> -   of DR is guaranteed to have.  */
> +   of DR_INFO is guaranteed to have.  */
>
>  static inline unsigned int
> -vect_known_alignment_in_bytes (struct data_reference *dr)
> +vect_known_alignment_in_bytes (dr_vec_info *dr_info)
>  {
> -  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
> -    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
> -  if (DR_MISALIGNMENT (dr) == 0)
> -    return DR_TARGET_ALIGNMENT (dr);
> -  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
> +  if (DR_MISALIGNMENT (dr_info) == DR_MISALIGNMENT_UNKNOWN)
> +    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
> +  if (DR_MISALIGNMENT (dr_info) == 0)
> +    return DR_TARGET_ALIGNMENT (dr_info);
> +  return DR_MISALIGNMENT (dr_info) & -DR_MISALIGNMENT (dr_info);
>  }
>
> -/* Return the behavior of DR with respect to the vectorization context
> +/* Return the behavior of DR_INFO with respect to the vectorization context
>     (which for outer loop vectorization might not be the behavior recorded
> -   in DR itself).  */
> +   in DR_INFO itself).  */
>
>  static inline innermost_loop_behavior *
> -vect_dr_behavior (data_reference *dr)
> +vect_dr_behavior (dr_vec_info *dr_info)
>  {
> -  gimple *stmt = DR_STMT (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    if (loop_vinfo == NULL
>        || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
> -    return &DR_INNERMOST (dr);
> +    return &DR_INNERMOST (dr_info->dr);
>    else
>      return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
>  }
> @@ -1451,17 +1450,17 @@ vect_max_vf (loop_vec_info loop_vinfo)
>    return MAX_VECTORIZATION_FACTOR;
>  }
>
> -/* Return the size of the value accessed by unvectorized data reference DR.
> -   This is only valid once STMT_VINFO_VECTYPE has been calculated for the
> -   associated gimple statement, since that guarantees that DR accesses
> -   either a scalar or a scalar equivalent.  ("Scalar equivalent" here
> -   includes things like V1SI, which can be vectorized in the same way
> +/* Return the size of the value accessed by unvectorized data reference
> +   DR_INFO.  This is only valid once STMT_VINFO_VECTYPE has been calculated
> +   for the associated gimple statement, since that guarantees that DR_INFO
> +   accesses either a scalar or a scalar equivalent.  ("Scalar equivalent"
> +   here includes things like V1SI, which can be vectorized in the same way
>     as a plain SI.)  */
>
>  inline unsigned int
> -vect_get_scalar_dr_size (struct data_reference *dr)
> +vect_get_scalar_dr_size (dr_vec_info *dr_info)
>  {
> -  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr))));
> +  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr_info->dr))));
>  }
>
>  /* Source location + hotness information. */
> @@ -1561,7 +1560,7 @@ extern tree vect_get_mask_type_for_stmt
>  /* In tree-vect-data-refs.c.  */
>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>  extern enum dr_alignment_support vect_supportable_dr_alignment
> -                                           (struct data_reference *, bool);
> +                                           (dr_vec_info *, bool);
>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>                                             HOST_WIDE_INT *);
>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-26 11:30:56.193256600 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-26 11:42:19.031663762 +0100
> @@ -192,14 +192,16 @@ vect_check_nonzero_value (loop_vec_info
>    LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
>  }
>
> -/* Return true if we know that the order of vectorized STMTINFO_A and
> -   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
> -   STMTINFO_B.  At least one of the statements is a write.  */
> +/* Return true if we know that the order of vectorized DR_INFO_A and
> +   vectorized DR_INFO_B will be the same as the order of DR_INFO_A and
> +   DR_INFO_B.  At least one of the accesses is a write.  */
>
>  static bool
> -vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
> -                              stmt_vec_info stmtinfo_b)
> +vect_preserves_scalar_order_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b)
>  {
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
> +
>    /* Single statements are always kept in their original order.  */
>    if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
>        && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
> @@ -294,8 +296,10 @@ vect_analyze_data_ref_dependence (struct
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> -  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>    lambda_vector dist_v;
>    unsigned int loop_depth;
>
> @@ -471,7 +475,7 @@ vect_analyze_data_ref_dependence (struct
>                 ... = a[i];
>                 a[i+1] = ...;
>              where loads from the group interleave with the store.  */
> -         if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
> +         if (!vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
>             {
>               if (dump_enabled_p ())
>                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -609,6 +613,8 @@ vect_slp_analyze_data_ref_dependence (st
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
>
>    /* We need to check dependences of statements marked as unvectorizable
>       as well, they still can prohibit vectorization.  */
> @@ -626,9 +632,9 @@ vect_slp_analyze_data_ref_dependence (st
>
>    /* If dra and drb are part of the same interleaving chain consider
>       them independent.  */
> -  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
> -      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
> -         == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
> +  if (STMT_VINFO_GROUPED_ACCESS (dr_info_a->stmt)
> +      && (DR_GROUP_FIRST_ELEMENT (dr_info_a->stmt)
> +         == DR_GROUP_FIRST_ELEMENT (dr_info_b->stmt)))
>      return false;
>
>    /* Unknown data dependence.  */
> @@ -842,7 +848,8 @@ vect_record_base_alignments (vec_info *v
>    unsigned int i;
>    FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        if (!DR_IS_CONDITIONAL_IN_STMT (dr)
>           && STMT_VINFO_VECTORIZABLE (stmt_info)
>           && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
> @@ -858,34 +865,33 @@ vect_record_base_alignments (vec_info *v
>      }
>  }
>
> -/* Return the target alignment for the vectorized form of DR.  */
> +/* Return the target alignment for the vectorized form of DR_INFO.  */
>
>  static unsigned int
> -vect_calculate_target_alignment (struct data_reference *dr)
> +vect_calculate_target_alignment (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> +  tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
>    return targetm.vectorize.preferred_vector_alignment (vectype);
>  }
>
>  /* Function vect_compute_data_ref_alignment
>
> -   Compute the misalignment of the data reference DR.
> +   Compute the misalignment of the data reference DR_INFO.
>
>     Output:
> -   1. DR_MISALIGNMENT (DR) is defined.
> +   1. DR_MISALIGNMENT (DR_INFO) is defined.
>
>     FOR NOW: No analysis is actually performed. Misalignment is calculated
>     only for trivial cases. TODO.  */
>
>  static void
> -vect_compute_data_ref_alignment (struct data_reference *dr)
> +vect_compute_data_ref_alignment (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> -  tree ref = DR_REF (dr);
> +  tree ref = DR_REF (dr_info->dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (dump_enabled_p ())
> @@ -896,17 +902,17 @@ vect_compute_data_ref_alignment (struct
>      loop = LOOP_VINFO_LOOP (loop_vinfo);
>
>    /* Initialize misalignment to unknown.  */
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
>
>    if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>      return;
>
> -  innermost_loop_behavior *drb = vect_dr_behavior (dr);
> +  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
>    bool step_preserves_misalignment_p;
>
>    unsigned HOST_WIDE_INT vector_alignment
> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
> -  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
> +    = vect_calculate_target_alignment (dr_info) / BITS_PER_UNIT;
> +  DR_TARGET_ALIGNMENT (dr_info) = vector_alignment;
>
>    /* No step for BB vectorization.  */
>    if (!loop)
> @@ -924,7 +930,7 @@ vect_compute_data_ref_alignment (struct
>    else if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        step_preserves_misalignment_p
> -       = (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
> +       = (DR_STEP_ALIGNMENT (dr_info->dr) % vector_alignment) == 0;
>
>        if (dump_enabled_p ())
>         {
> @@ -946,7 +952,7 @@ vect_compute_data_ref_alignment (struct
>      {
>        poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>        step_preserves_misalignment_p
> -       = multiple_p (DR_STEP_ALIGNMENT (dr) * vf, vector_alignment);
> +       = multiple_p (DR_STEP_ALIGNMENT (dr_info->dr) * vf, vector_alignment);
>
>        if (!step_preserves_misalignment_p && dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1009,8 +1015,8 @@ vect_compute_data_ref_alignment (struct
>            dump_printf (MSG_NOTE, "\n");
>          }
>
> -      DR_VECT_AUX (dr)->base_decl = base;
> -      DR_VECT_AUX (dr)->base_misaligned = true;
> +      dr_info->base_decl = base;
> +      dr_info->base_misaligned = true;
>        base_misalignment = 0;
>      }
>    poly_int64 misalignment
> @@ -1038,12 +1044,13 @@ vect_compute_data_ref_alignment (struct
>        return;
>      }
>
> -  SET_DR_MISALIGNMENT (dr, const_misalignment);
> +  SET_DR_MISALIGNMENT (dr_info, const_misalignment);
>
>    if (dump_enabled_p ())
>      {
>        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
> +                      "misalign = %d bytes of ref ",
> +                      DR_MISALIGNMENT (dr_info));
>        dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
>        dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>      }
> @@ -1052,28 +1059,28 @@ vect_compute_data_ref_alignment (struct
>  }
>
>  /* Function vect_update_misalignment_for_peel.
> -   Sets DR's misalignment
> -   - to 0 if it has the same alignment as DR_PEEL,
> -   - to the misalignment computed using NPEEL if DR's salignment is known,
> +   Sets DR_INFO's misalignment
> +   - to 0 if it has the same alignment as DR_PEEL_INFO,
> +   - to the misalignment computed using NPEEL if DR_INFO's misalignment is known,
>     - to -1 (unknown) otherwise.
>
> -   DR - the data reference whose misalignment is to be adjusted.
> -   DR_PEEL - the data reference whose misalignment is being made
> -             zero in the vector loop by the peel.
> +   DR_INFO - the data reference whose misalignment is to be adjusted.
> +   DR_PEEL_INFO - the data reference whose misalignment is being made
> +                 zero in the vector loop by the peel.
>     NPEEL - the number of iterations in the peel loop if the misalignment
> -           of DR_PEEL is known at compile time.  */
> +           of DR_PEEL_INFO is known at compile time.  */
>
>  static void
> -vect_update_misalignment_for_peel (struct data_reference *dr,
> -                                   struct data_reference *dr_peel, int npeel)
> +vect_update_misalignment_for_peel (dr_vec_info *dr_info,
> +                                  dr_vec_info *dr_peel_info, int npeel)
>  {
>    unsigned int i;
>    vec<dr_p> same_aligned_drs;
>    struct data_reference *current_dr;
> -  int dr_size = vect_get_scalar_dr_size (dr);
> -  int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
> +  int dr_size = vect_get_scalar_dr_size (dr_info);
> +  int dr_peel_size = vect_get_scalar_dr_size (dr_peel_info);
> +  stmt_vec_info stmt_info = dr_info->stmt;
> +  stmt_vec_info peel_stmt_info = dr_peel_info->stmt;
>
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
> @@ -1084,51 +1091,52 @@ vect_update_misalignment_for_peel (struc
>
>    /* It can be assumed that the data refs with the same alignment as dr_peel
>       are aligned in the vector loop.  */
> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>      {
> -      if (current_dr != dr)
> +      if (current_dr != dr_info->dr)
>          continue;
> -      gcc_assert (!known_alignment_for_access_p (dr)
> -                 || !known_alignment_for_access_p (dr_peel)
> -                 || (DR_MISALIGNMENT (dr) / dr_size
> -                     == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
> -      SET_DR_MISALIGNMENT (dr, 0);
> +      gcc_assert (!known_alignment_for_access_p (dr_info)
> +                 || !known_alignment_for_access_p (dr_peel_info)
> +                 || (DR_MISALIGNMENT (dr_info) / dr_size
> +                     == DR_MISALIGNMENT (dr_peel_info) / dr_peel_size));
> +      SET_DR_MISALIGNMENT (dr_info, 0);
>        return;
>      }
>
> -  if (known_alignment_for_access_p (dr)
> -      && known_alignment_for_access_p (dr_peel))
> +  if (known_alignment_for_access_p (dr_info)
> +      && known_alignment_for_access_p (dr_peel_info))
>      {
> -      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
> -      int misal = DR_MISALIGNMENT (dr);
> +      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
> +                                           size_zero_node) < 0;
> +      int misal = DR_MISALIGNMENT (dr_info);
>        misal += negative ? -npeel * dr_size : npeel * dr_size;
> -      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
> -      SET_DR_MISALIGNMENT (dr, misal);
> +      misal &= DR_TARGET_ALIGNMENT (dr_info) - 1;
> +      SET_DR_MISALIGNMENT (dr_info, misal);
>        return;
>      }
>
>    if (dump_enabled_p ())
>      dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
>                      "to unknown (-1).\n");
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
>  }
>
>
>  /* Function verify_data_ref_alignment
>
> -   Return TRUE if DR can be handled with respect to alignment.  */
> +   Return TRUE if DR_INFO can be handled with respect to alignment.  */
>
>  static bool
> -verify_data_ref_alignment (data_reference_p dr)
> +verify_data_ref_alignment (dr_vec_info *dr_info)
>  {
>    enum dr_alignment_support supportable_dr_alignment
> -    = vect_supportable_dr_alignment (dr, false);
> +    = vect_supportable_dr_alignment (dr_info, false);
>    if (!supportable_dr_alignment)
>      {
>        if (dump_enabled_p ())
>         {
> -         if (DR_IS_READ (dr))
> +         if (DR_IS_READ (dr_info->dr))
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                              "not vectorized: unsupported unaligned load.");
>           else
> @@ -1137,7 +1145,7 @@ verify_data_ref_alignment (data_referenc
>                              "store.");
>
>           dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                            DR_REF (dr));
> +                            DR_REF (dr_info->dr));
>           dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>         }
>        return false;
> @@ -1164,7 +1172,8 @@ vect_verify_datarefs_alignment (loop_vec
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1180,7 +1189,7 @@ vect_verify_datarefs_alignment (loop_vec
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      if (! verify_data_ref_alignment (dr))
> +      if (! verify_data_ref_alignment (dr_info))
>         return false;
>      }
>
> @@ -1202,13 +1211,13 @@ not_size_aligned (tree exp)
>
>  /* Function vector_alignment_reachable_p
>
> -   Return true if vector alignment for DR is reachable by peeling
> +   Return true if vector alignment for DR_INFO is reachable by peeling
>     a few loop iterations.  Return false otherwise.  */
>
>  static bool
> -vector_alignment_reachable_p (struct data_reference *dr)
> +vector_alignment_reachable_p (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -1219,13 +1228,13 @@ vector_alignment_reachable_p (struct dat
>        int elem_size, mis_in_elements;
>
>        /* FORNOW: handle only known alignment.  */
> -      if (!known_alignment_for_access_p (dr))
> +      if (!known_alignment_for_access_p (dr_info))
>         return false;
>
>        poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
>        poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
>        elem_size = vector_element_size (vector_size, nelements);
> -      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
> +      mis_in_elements = DR_MISALIGNMENT (dr_info) / elem_size;
>
>        if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
>         return false;
> @@ -1233,7 +1242,7 @@ vector_alignment_reachable_p (struct dat
>
>    /* If misalignment is known at the compile time then allow peeling
>       only if natural alignment is reachable through peeling.  */
> -  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
> +  if (known_alignment_for_access_p (dr_info) && !aligned_access_p (dr_info))
>      {
>        HOST_WIDE_INT elmsize =
>                 int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
> @@ -1242,9 +1251,9 @@ vector_alignment_reachable_p (struct dat
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
>           dump_printf (MSG_NOTE,
> -                      ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
> +                      ". misalignment = %d.\n", DR_MISALIGNMENT (dr_info));
>         }
> -      if (DR_MISALIGNMENT (dr) % elmsize)
> +      if (DR_MISALIGNMENT (dr_info) % elmsize)
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1253,10 +1262,10 @@ vector_alignment_reachable_p (struct dat
>         }
>      }
>
> -  if (!known_alignment_for_access_p (dr))
> +  if (!known_alignment_for_access_p (dr_info))
>      {
> -      tree type = TREE_TYPE (DR_REF (dr));
> -      bool is_packed = not_size_aligned (DR_REF (dr));
> +      tree type = TREE_TYPE (DR_REF (dr_info->dr));
> +      bool is_packed = not_size_aligned (DR_REF (dr_info->dr));
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                          "Unknown misalignment, %snaturally aligned\n",
> @@ -1268,16 +1277,16 @@ vector_alignment_reachable_p (struct dat
>  }
>
>
> -/* Calculate the cost of the memory access represented by DR.  */
> +/* Calculate the cost of the memory access represented by DR_INFO.  */
>
>  static void
> -vect_get_data_access_cost (struct data_reference *dr,
> +vect_get_data_access_cost (dr_vec_info *dr_info,
>                             unsigned int *inside_cost,
>                             unsigned int *outside_cost,
>                            stmt_vector_for_cost *body_cost_vec,
>                            stmt_vector_for_cost *prologue_cost_vec)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    int ncopies;
>
> @@ -1286,7 +1295,7 @@ vect_get_data_access_cost (struct data_r
>    else
>      ncopies = vect_get_num_copies (loop_vinfo, STMT_VINFO_VECTYPE (stmt_info));
>
> -  if (DR_IS_READ (dr))
> +  if (DR_IS_READ (dr_info->dr))
>      vect_get_load_cost (stmt_info, ncopies, true, inside_cost, outside_cost,
>                         prologue_cost_vec, body_cost_vec, false);
>    else
> @@ -1301,7 +1310,7 @@ vect_get_data_access_cost (struct data_r
>
>  typedef struct _vect_peel_info
>  {
> -  struct data_reference *dr;
> +  dr_vec_info *dr_info;
>    int npeel;
>    unsigned int count;
>  } *vect_peel_info;
> @@ -1335,16 +1344,17 @@ peel_info_hasher::equal (const _vect_pee
>  }
>
>
> -/* Insert DR into peeling hash table with NPEEL as key.  */
> +/* Insert DR_INFO into peeling hash table with NPEEL as key.  */
>
>  static void
>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
> +                         loop_vec_info loop_vinfo, dr_vec_info *dr_info,
>                            int npeel)
>  {
>    struct _vect_peel_info elem, *slot;
>    _vect_peel_info **new_slot;
> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> +  bool supportable_dr_alignment
> +    = vect_supportable_dr_alignment (dr_info, true);
>
>    elem.npeel = npeel;
>    slot = peeling_htab->find (&elem);
> @@ -1354,7 +1364,7 @@ vect_peeling_hash_insert (hash_table<pee
>      {
>        slot = XNEW (struct _vect_peel_info);
>        slot->npeel = npeel;
> -      slot->dr = dr;
> +      slot->dr_info = dr_info;
>        slot->count = 1;
>        new_slot = peeling_htab->find_slot (slot, INSERT);
>        *new_slot = slot;
> @@ -1381,19 +1391,19 @@ vect_peeling_hash_get_most_frequent (_ve
>      {
>        max->peel_info.npeel = elem->npeel;
>        max->peel_info.count = elem->count;
> -      max->peel_info.dr = elem->dr;
> +      max->peel_info.dr_info = elem->dr_info;
>      }
>
>    return 1;
>  }
>
>  /* Get the costs of peeling NPEEL iterations checking data access costs
> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
>     misalignment will be zero after peeling.  */
>
>  static void
>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
> -                               struct data_reference *dr0,
> +                               dr_vec_info *dr0_info,
>                                 unsigned int *inside_cost,
>                                 unsigned int *outside_cost,
>                                 stmt_vector_for_cost *body_cost_vec,
> @@ -1406,7 +1416,8 @@ vect_get_peeling_costs_all_drs (vec<data
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
>
> @@ -1423,16 +1434,16 @@ vect_get_peeling_costs_all_drs (vec<data
>         continue;
>
>        int save_misalignment;
> -      save_misalignment = DR_MISALIGNMENT (dr);
> +      save_misalignment = DR_MISALIGNMENT (dr_info);
>        if (npeel == 0)
>         ;
> -      else if (unknown_misalignment && dr == dr0)
> -       SET_DR_MISALIGNMENT (dr, 0);
> +      else if (unknown_misalignment && dr_info == dr0_info)
> +       SET_DR_MISALIGNMENT (dr_info, 0);
>        else
> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
> +       vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
> +      vect_get_data_access_cost (dr_info, inside_cost, outside_cost,
>                                  body_cost_vec, prologue_cost_vec);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
>      }
>  }
>
> @@ -1446,7 +1457,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    vect_peel_info elem = *slot;
>    int dummy;
>    unsigned int inside_cost = 0, outside_cost = 0;
> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
> +  stmt_vec_info stmt_info = elem->dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>                        epilogue_cost_vec;
> @@ -1456,7 +1467,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    epilogue_cost_vec.create (2);
>
>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
> -                                 elem->dr, &inside_cost, &outside_cost,
> +                                 elem->dr_info, &inside_cost, &outside_cost,
>                                   &body_cost_vec, &prologue_cost_vec,
>                                   elem->npeel, false);
>
> @@ -1480,7 +1491,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>      {
>        min->inside_cost = inside_cost;
>        min->outside_cost = outside_cost;
> -      min->peel_info.dr = elem->dr;
> +      min->peel_info.dr_info = elem->dr_info;
>        min->peel_info.npeel = elem->npeel;
>        min->peel_info.count = elem->count;
>      }
> @@ -1499,7 +1510,7 @@ vect_peeling_hash_choose_best_peeling (h
>  {
>     struct _vect_peel_extended_info res;
>
> -   res.peel_info.dr = NULL;
> +   res.peel_info.dr_info = NULL;
>
>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>       {
> @@ -1523,7 +1534,7 @@ vect_peeling_hash_choose_best_peeling (h
>  /* Return true if the new peeling NPEEL is supported.  */
>
>  static bool
> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
> +vect_peeling_supportable (loop_vec_info loop_vinfo, dr_vec_info *dr0_info,
>                           unsigned npeel)
>  {
>    unsigned i;
> @@ -1536,10 +1547,11 @@ vect_peeling_supportable (loop_vec_info
>      {
>        int save_misalignment;
>
> -      if (dr == dr0)
> +      if (dr == dr0_info->dr)
>         continue;
>
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        /* For interleaving, only the alignment of the first access
>          matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> @@ -1552,10 +1564,11 @@ vect_peeling_supportable (loop_vec_info
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      save_misalignment = DR_MISALIGNMENT (dr);
> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      save_misalignment = DR_MISALIGNMENT (dr_info);
> +      vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (dr_info, false);
> +      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
>
>        if (!supportable_dr_alignment)
>         return false;
> @@ -1661,7 +1674,8 @@ vect_enhance_data_refs_alignment (loop_v
>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    enum dr_alignment_support supportable_dr_alignment;
> -  struct data_reference *dr0 = NULL, *first_store = NULL;
> +  dr_vec_info *first_store = NULL;
> +  dr_vec_info *dr0_info = NULL;
>    struct data_reference *dr;
>    unsigned int i, j;
>    bool do_peeling = false;
> @@ -1671,7 +1685,7 @@ vect_enhance_data_refs_alignment (loop_v
>    bool one_misalignment_known = false;
>    bool one_misalignment_unknown = false;
>    bool one_dr_unsupportable = false;
> -  struct data_reference *unsupportable_dr = NULL;
> +  dr_vec_info *unsupportable_dr_info = NULL;
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>    unsigned possible_npeel_number = 1;
>    tree vectype;
> @@ -1718,7 +1732,8 @@ vect_enhance_data_refs_alignment (loop_v
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1741,21 +1756,23 @@ vect_enhance_data_refs_alignment (loop_v
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> -      do_peeling = vector_alignment_reachable_p (dr);
> +      supportable_dr_alignment = vect_supportable_dr_alignment (dr_info, true);
> +      do_peeling = vector_alignment_reachable_p (dr_info);
>        if (do_peeling)
>          {
> -          if (known_alignment_for_access_p (dr))
> +          if (known_alignment_for_access_p (dr_info))
>              {
>               unsigned int npeel_tmp = 0;
>               bool negative = tree_int_cst_compare (DR_STEP (dr),
>                                                     size_zero_node) < 0;
>
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> -             unsigned int dr_size = vect_get_scalar_dr_size (dr);
> -             mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
> -             if (DR_MISALIGNMENT (dr) != 0)
> +             unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
> +             unsigned int dr_size = vect_get_scalar_dr_size (dr_info);
> +             mis = (negative
> +                    ? DR_MISALIGNMENT (dr_info)
> +                    : -DR_MISALIGNMENT (dr_info));
> +             if (DR_MISALIGNMENT (dr_info) != 0)
>                 npeel_tmp = (mis & (target_align - 1)) / dr_size;
>
>                /* For multiple types, it is possible that the bigger type access
> @@ -1780,7 +1797,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>                   /* NPEEL_TMP is 0 when there is no misalignment, but also
>                      allow peeling NELEMENTS.  */
> -                 if (DR_MISALIGNMENT (dr) == 0)
> +                 if (DR_MISALIGNMENT (dr_info) == 0)
>                     possible_npeel_number++;
>                 }
>
> @@ -1789,7 +1806,7 @@ vect_enhance_data_refs_alignment (loop_v
>                for (j = 0; j < possible_npeel_number; j++)
>                  {
>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
> -                                           dr, npeel_tmp);
> +                                           dr_info, npeel_tmp);
>                   npeel_tmp += target_align / dr_size;
>                  }
>
> @@ -1803,11 +1820,11 @@ vect_enhance_data_refs_alignment (loop_v
>                   stores over load.  */
>               unsigned same_align_drs
>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
> -             if (!dr0
> +             if (!dr0_info
>                   || same_align_drs_max < same_align_drs)
>                 {
>                   same_align_drs_max = same_align_drs;
> -                 dr0 = dr;
> +                 dr0_info = dr_info;
>                 }
>               /* For data-refs with the same number of related
>                  accesses prefer the one where the misalign
> @@ -1816,13 +1833,13 @@ vect_enhance_data_refs_alignment (loop_v
>                 {
>                   struct loop *ivloop0, *ivloop;
>                   ivloop0 = outermost_invariant_loop_for_expr
> -                   (loop, DR_BASE_ADDRESS (dr0));
> +                   (loop, DR_BASE_ADDRESS (dr0_info->dr));
>                   ivloop = outermost_invariant_loop_for_expr
>                     (loop, DR_BASE_ADDRESS (dr));
>                   if ((ivloop && !ivloop0)
>                       || (ivloop && ivloop0
>                           && flow_loop_nested_p (ivloop, ivloop0)))
> -                   dr0 = dr;
> +                   dr0_info = dr_info;
>                 }
>
>               one_misalignment_unknown = true;
> @@ -1832,16 +1849,16 @@ vect_enhance_data_refs_alignment (loop_v
>               if (!supportable_dr_alignment)
>               {
>                 one_dr_unsupportable = true;
> -               unsupportable_dr = dr;
> +               unsupportable_dr_info = dr_info;
>               }
>
>               if (!first_store && DR_IS_WRITE (dr))
> -               first_store = dr;
> +               first_store = dr_info;
>              }
>          }
>        else
>          {
> -          if (!aligned_access_p (dr))
> +          if (!aligned_access_p (dr_info))
>              {
>                if (dump_enabled_p ())
>                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1879,7 +1896,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
> +      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
>                                       &load_inside_cost,
>                                       &load_outside_cost,
>                                       &dummy, &dummy, estimated_npeels, true);
> @@ -1905,7 +1922,7 @@ vect_enhance_data_refs_alignment (loop_v
>           || (load_inside_cost == store_inside_cost
>               && load_outside_cost > store_outside_cost))
>         {
> -         dr0 = first_store;
> +         dr0_info = first_store;
>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>         }
> @@ -1929,18 +1946,18 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        peel_for_unknown_alignment.peel_info.count = 1
> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
> +       + STMT_VINFO_SAME_ALIGN_REFS (dr0_info->stmt).length ();
>      }
>
>    peel_for_unknown_alignment.peel_info.npeel = 0;
> -  peel_for_unknown_alignment.peel_info.dr = dr0;
> +  peel_for_unknown_alignment.peel_info.dr_info = dr0_info;
>
>    best_peel = peel_for_unknown_alignment;
>
>    peel_for_known_alignment.inside_cost = INT_MAX;
>    peel_for_known_alignment.outside_cost = INT_MAX;
>    peel_for_known_alignment.peel_info.count = 0;
> -  peel_for_known_alignment.peel_info.dr = NULL;
> +  peel_for_known_alignment.peel_info.dr_info = NULL;
>
>    if (do_peeling && one_misalignment_known)
>      {
> @@ -1952,7 +1969,7 @@ vect_enhance_data_refs_alignment (loop_v
>      }
>
>    /* Compare costs of peeling for known and unknown alignment. */
> -  if (peel_for_known_alignment.peel_info.dr != NULL
> +  if (peel_for_known_alignment.peel_info.dr_info != NULL
>        && peel_for_unknown_alignment.inside_cost
>        >= peel_for_known_alignment.inside_cost)
>      {
> @@ -1969,7 +1986,7 @@ vect_enhance_data_refs_alignment (loop_v
>       since we'd have to discard a chosen peeling except when it accidentally
>       aligned the unsupportable data ref.  */
>    if (one_dr_unsupportable)
> -    dr0 = unsupportable_dr;
> +    dr0_info = unsupportable_dr_info;
>    else if (do_peeling)
>      {
>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
> @@ -2000,7 +2017,7 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        npeel = best_peel.peel_info.npeel;
> -      dr0 = best_peel.peel_info.dr;
> +      dr0_info = best_peel.peel_info.dr_info;
>
>        /* If no peeling is not more expensive than the best peeling we
>          have so far, don't perform any peeling.  */
> @@ -2010,12 +2027,12 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
> +      stmt_vec_info stmt_info = dr0_info->stmt;
>        vectype = STMT_VINFO_VECTYPE (stmt_info);
>
> -      if (known_alignment_for_access_p (dr0))
> +      if (known_alignment_for_access_p (dr0_info))
>          {
> -         bool negative = tree_int_cst_compare (DR_STEP (dr0),
> +         bool negative = tree_int_cst_compare (DR_STEP (dr0_info->dr),
>                                                 size_zero_node) < 0;
>            if (!npeel)
>              {
> @@ -2024,16 +2041,17 @@ vect_enhance_data_refs_alignment (loop_v
>                   updating DR_MISALIGNMENT values.  The peeling factor is the
>                   vectorization factor minus the misalignment as an element
>                   count.  */
> -             mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> +             mis = (negative
> +                    ? DR_MISALIGNMENT (dr0_info)
> +                    : -DR_MISALIGNMENT (dr0_info));
> +             unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
>               npeel = ((mis & (target_align - 1))
> -                      / vect_get_scalar_dr_size (dr0));
> +                      / vect_get_scalar_dr_size (dr0_info));
>              }
>
>           /* For interleaved data access every iteration accesses all the
>              members of the group, therefore we divide the number of iterations
>              by the group size.  */
> -         stmt_info = vect_dr_stmt (dr0);
>           if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>             npeel /= DR_GROUP_SIZE (stmt_info);
>
> @@ -2043,11 +2061,11 @@ vect_enhance_data_refs_alignment (loop_v
>          }
>
>        /* Ensure that all datarefs can be vectorized after the peel.  */
> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
> +      if (!vect_peeling_supportable (loop_vinfo, dr0_info, npeel))
>         do_peeling = false;
>
>        /* Check if all datarefs are supportable and log.  */
> -      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
> +      if (do_peeling && known_alignment_for_access_p (dr0_info) && npeel == 0)
>          {
>            stat = vect_verify_datarefs_alignment (loop_vinfo);
>            if (!stat)
> @@ -2066,8 +2084,9 @@ vect_enhance_data_refs_alignment (loop_v
>                unsigned max_peel = npeel;
>                if (max_peel == 0)
>                  {
> -                 unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> -                 max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
> +                 unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
> +                 max_peel = (target_align
> +                             / vect_get_scalar_dr_size (dr0_info) - 1);
>                  }
>                if (max_peel > max_allowed_peel)
>                  {
> @@ -2103,25 +2122,26 @@ vect_enhance_data_refs_alignment (loop_v
>               vectorization factor times the size).  Otherwise, the
>               misalignment of DR_i must be set to unknown.  */
>           FOR_EACH_VEC_ELT (datarefs, i, dr)
> -           if (dr != dr0)
> +           if (dr != dr0_info->dr)
>               {
>                 /* Strided accesses perform only component accesses, alignment
>                    is irrelevant for them.  */
> -               stmt_info = vect_dr_stmt (dr);
> +               dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +               stmt_info = dr_info->stmt;
>                 if (STMT_VINFO_STRIDED_P (stmt_info)
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
>
> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
> +               vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
>               }
>
> -          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
> +          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
>            if (npeel)
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
>            else
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
> -             = DR_MISALIGNMENT (dr0);
> -         SET_DR_MISALIGNMENT (dr0, 0);
> +             = DR_MISALIGNMENT (dr0_info);
> +         SET_DR_MISALIGNMENT (dr0_info, 0);
>           if (dump_enabled_p ())
>              {
>                dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2156,11 +2176,12 @@ vect_enhance_data_refs_alignment (loop_v
>      {
>        FOR_EACH_VEC_ELT (datarefs, i, dr)
>          {
> -         stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +         dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +         stmt_vec_info stmt_info = dr_info->stmt;
>
>           /* For interleaving, only the alignment of the first access
>              matters.  */
> -         if (aligned_access_p (dr)
> +         if (aligned_access_p (dr_info)
>               || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>                   && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
>             continue;
> @@ -2175,14 +2196,15 @@ vect_enhance_data_refs_alignment (loop_v
>               break;
>             }
>
> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +         supportable_dr_alignment
> +           = vect_supportable_dr_alignment (dr_info, false);
>
>            if (!supportable_dr_alignment)
>              {
>                int mask;
>                tree vectype;
>
> -              if (known_alignment_for_access_p (dr)
> +              if (known_alignment_for_access_p (dr_info)
>                    || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
>                       >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
>                  {
> @@ -2190,7 +2212,6 @@ vect_enhance_data_refs_alignment (loop_v
>                    break;
>                  }
>
> -             stmt_info = vect_dr_stmt (dr);
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>               gcc_assert (vectype);
>
> @@ -2241,8 +2262,8 @@ vect_enhance_data_refs_alignment (loop_v
>           of the loop being vectorized.  */
>        FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
>          {
> -          dr = STMT_VINFO_DATA_REF (stmt_info);
> -         SET_DR_MISALIGNMENT (dr, 0);
> +         dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
> +         SET_DR_MISALIGNMENT (dr_info, 0);
>           if (dump_enabled_p ())
>              dump_printf_loc (MSG_NOTE, vect_location,
>                               "Alignment of access forced using versioning.\n");
> @@ -2278,8 +2299,10 @@ vect_find_same_alignment_drs (struct dat
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> -  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>
>    if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
>      return;
> @@ -2302,9 +2325,9 @@ vect_find_same_alignment_drs (struct dat
>    if (maybe_ne (diff, 0))
>      {
>        /* Get the wider of the two alignments.  */
> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
> +      unsigned int align_a = (vect_calculate_target_alignment (dr_info_a)
>                               / BITS_PER_UNIT);
> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
> +      unsigned int align_b = (vect_calculate_target_alignment (dr_info_b)
>                               / BITS_PER_UNIT);
>        unsigned int max_align = MAX (align_a, align_b);
>
> @@ -2352,9 +2375,9 @@ vect_analyze_data_refs_alignment (loop_v
>    vect_record_base_alignments (vinfo);
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -      if (STMT_VINFO_VECTORIZABLE (stmt_info))
> -       vect_compute_data_ref_alignment (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
> +       vect_compute_data_ref_alignment (dr_info);
>      }
>
>    return true;
> @@ -2370,17 +2393,17 @@ vect_slp_analyze_and_verify_node_alignme
>       the node is permuted in which case we start from the first
>       element in the group.  */
>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
>      first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>
> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
> -  vect_compute_data_ref_alignment (dr);
> +  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
> +  vect_compute_data_ref_alignment (dr_info);
>    /* For creating the data-ref pointer we need alignment of the
>       first element anyway.  */
> -  if (dr != first_dr)
> -    vect_compute_data_ref_alignment (first_dr);
> -  if (! verify_data_ref_alignment (dr))
> +  if (dr_info != first_dr_info)
> +    vect_compute_data_ref_alignment (first_dr_info);
> +  if (! verify_data_ref_alignment (dr_info))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -2418,19 +2441,20 @@ vect_slp_analyze_and_verify_instance_ali
>  }
>
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> +/* Analyze groups of accesses: check that DR_INFO belongs to a group of
>     accesses of legal size, step, etc.  Detect gaps, single element
>     interleaving, and other special cases. Set grouped access info.
>     Collect groups of strided stores for further use in SLP analysis.
>     Worker for vect_analyze_group_access.  */
>
>  static bool
> -vect_analyze_group_access_1 (struct data_reference *dr)
> +vect_analyze_group_access_1 (dr_vec_info *dr_info)
>  {
> +  data_reference *dr = dr_info->dr;
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    HOST_WIDE_INT dr_step = -1;
> @@ -2507,7 +2531,7 @@ vect_analyze_group_access_1 (struct data
>        if (bb_vinfo)
>         {
>           /* Mark the statement as unvectorizable.  */
> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>           return true;
>         }
>
> @@ -2655,18 +2679,18 @@ vect_analyze_group_access_1 (struct data
>    return true;
>  }
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> +/* Analyze groups of accesses: check that DR_INFO belongs to a group of
>     accesses of legal size, step, etc.  Detect gaps, single element
>     interleaving, and other special cases. Set grouped access info.
>     Collect groups of strided stores for further use in SLP analysis.  */
>
>  static bool
> -vect_analyze_group_access (struct data_reference *dr)
> +vect_analyze_group_access (dr_vec_info *dr_info)
>  {
> -  if (!vect_analyze_group_access_1 (dr))
> +  if (!vect_analyze_group_access_1 (dr_info))
>      {
>        /* Dissolve the group if present.  */
> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
> +      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (dr_info->stmt);
>        while (stmt_info)
>         {
>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> @@ -2679,16 +2703,17 @@ vect_analyze_group_access (struct data_r
>    return true;
>  }
>
> -/* Analyze the access pattern of the data-reference DR.
> +/* Analyze the access pattern of the data-reference DR_INFO.
>     In case of non-consecutive accesses call vect_analyze_group_access() to
>     analyze groups of accesses.  */
>
>  static bool
> -vect_analyze_data_ref_access (struct data_reference *dr)
> +vect_analyze_data_ref_access (dr_vec_info *dr_info)
>  {
> +  data_reference *dr = dr_info->dr;
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>
> @@ -2768,10 +2793,10 @@ vect_analyze_data_ref_access (struct dat
>    if (TREE_CODE (step) != INTEGER_CST)
>      return (STMT_VINFO_STRIDED_P (stmt_info)
>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -               || vect_analyze_group_access (dr)));
> +               || vect_analyze_group_access (dr_info)));
>
>    /* Not consecutive access - check if it's a part of interleaving group.  */
> -  return vect_analyze_group_access (dr);
> +  return vect_analyze_group_access (dr_info);
>  }
>
>  /* Compare two data-references DRA and DRB to group them into chunks
> @@ -2916,7 +2941,8 @@ vect_analyze_data_ref_accesses (vec_info
>    for (i = 0; i < datarefs_copy.length () - 1;)
>      {
>        data_reference_p dra = datarefs_copy[i];
> -      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> +      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +      stmt_vec_info stmtinfo_a = dr_info_a->stmt;
>        stmt_vec_info lastinfo = NULL;
>        if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
>           || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
> @@ -2927,7 +2953,8 @@ vect_analyze_data_ref_accesses (vec_info
>        for (i = i + 1; i < datarefs_copy.length (); ++i)
>         {
>           data_reference_p drb = datarefs_copy[i];
> -         stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +         dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +         stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>           if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
>               || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
>             break;
> @@ -3050,25 +3077,28 @@ vect_analyze_data_ref_accesses (vec_info
>      }
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
> -        && !vect_analyze_data_ref_access (dr))
> -      {
> -       if (dump_enabled_p ())
> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                          "not vectorized: complicated access pattern.\n");
> +    {
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
> +         && !vect_analyze_data_ref_access (dr_info))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "not vectorized: complicated access pattern.\n");
>
> -        if (is_a <bb_vec_info> (vinfo))
> -         {
> -           /* Mark the statement as not vectorizable.  */
> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> -           continue;
> -         }
> -        else
> -         {
> -           datarefs_copy.release ();
> -           return false;
> -         }
> -      }
> +         if (is_a <bb_vec_info> (vinfo))
> +           {
> +             /* Mark the statement as not vectorizable.  */
> +             STMT_VINFO_VECTORIZABLE (dr_info->stmt) = false;
> +             continue;
> +           }
> +         else
> +           {
> +             datarefs_copy.release ();
> +             return false;
> +           }
> +       }
> +    }
>
>    datarefs_copy.release ();
>    return true;
> @@ -3077,7 +3107,7 @@ vect_analyze_data_ref_accesses (vec_info
>  /* Function vect_vfa_segment_size.
>
>     Input:
> -     DR: The data reference.
> +     DR_INFO: The data reference.
>       LENGTH_FACTOR: segment length to consider.
>
>     Return a value suitable for the dr_with_seg_len::seg_len field.
> @@ -3086,32 +3116,32 @@ vect_analyze_data_ref_accesses (vec_info
>     the size of the access; in effect it only describes the first byte.  */
>
>  static tree
> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
> +vect_vfa_segment_size (dr_vec_info *dr_info, tree length_factor)
>  {
>    length_factor = size_binop (MINUS_EXPR,
>                               fold_convert (sizetype, length_factor),
>                               size_one_node);
> -  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr)),
> +  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr_info->dr)),
>                      length_factor);
>  }
>
> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
> +/* Return a value that, when added to abs (vect_vfa_segment_size (DR_INFO)),
>     gives the worst-case number of bytes covered by the segment.  */
>
>  static unsigned HOST_WIDE_INT
> -vect_vfa_access_size (data_reference *dr)
> +vect_vfa_access_size (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
> -  tree ref_type = TREE_TYPE (DR_REF (dr));
> +  stmt_vec_info stmt_vinfo = dr_info->stmt;
> +  tree ref_type = TREE_TYPE (DR_REF (dr_info->dr));
>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>    unsigned HOST_WIDE_INT access_size = ref_size;
>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>      {
> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>      }
>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
> -      && (vect_supportable_dr_alignment (dr, false)
> +      && (vect_supportable_dr_alignment (dr_info, false)
>           == dr_explicit_realign_optimized))
>      {
>        /* We might access a full vector's worth.  */
> @@ -3121,12 +3151,13 @@ vect_vfa_access_size (data_reference *dr
>    return access_size;
>  }
>
> -/* Get the minimum alignment for all the scalar accesses that DR describes.  */
> +/* Get the minimum alignment for all the scalar accesses that DR_INFO
> +   describes.  */
>
>  static unsigned int
> -vect_vfa_align (const data_reference *dr)
> +vect_vfa_align (dr_vec_info *dr_info)
>  {
> -  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
> +  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
>  }
>
>  /* Function vect_no_alias_p.
> @@ -3139,27 +3170,27 @@ vect_vfa_align (const data_reference *dr
>     of dr_with_seg_len::{seg_len,access_size} for A and B.  */
>
>  static int
> -vect_compile_time_alias (struct data_reference *a, struct data_reference *b,
> +vect_compile_time_alias (dr_vec_info *a, dr_vec_info *b,
>                          tree segment_length_a, tree segment_length_b,
>                          unsigned HOST_WIDE_INT access_size_a,
>                          unsigned HOST_WIDE_INT access_size_b)
>  {
> -  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a));
> -  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b));
> +  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a->dr));
> +  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b->dr));
>    poly_uint64 const_length_a;
>    poly_uint64 const_length_b;
>
>    /* For negative step, we need to adjust address range by TYPE_SIZE_UNIT
>       bytes, e.g., int a[3] -> a[1] range is [a+4, a+16) instead of
>       [a, a+12) */
> -  if (tree_int_cst_compare (DR_STEP (a), size_zero_node) < 0)
> +  if (tree_int_cst_compare (DR_STEP (a->dr), size_zero_node) < 0)
>      {
>        const_length_a = (-wi::to_poly_wide (segment_length_a)).force_uhwi ();
>        offset_a = (offset_a + access_size_a) - const_length_a;
>      }
>    else
>      const_length_a = tree_to_poly_uint64 (segment_length_a);
> -  if (tree_int_cst_compare (DR_STEP (b), size_zero_node) < 0)
> +  if (tree_int_cst_compare (DR_STEP (b->dr), size_zero_node) < 0)
>      {
>        const_length_b = (-wi::to_poly_wide (segment_length_b)).force_uhwi ();
>        offset_b = (offset_b + access_size_b) - const_length_b;
> @@ -3269,30 +3300,34 @@ vect_check_lower_bound (loop_vec_info lo
>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>  }
>
> -/* Return true if it's unlikely that the step of the vectorized form of DR
> +/* Return true if it's unlikely that the step of the vectorized form of DR_INFO
>     will span fewer than GAP bytes.  */
>
>  static bool
> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
> +vect_small_gap_p (loop_vec_info loop_vinfo, dr_vec_info *dr_info,
> +                 poly_int64 gap)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
>      count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
> -  return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
> +  return (estimated_poly_value (gap)
> +         <= count * vect_get_scalar_dr_size (dr_info));
>  }
>
> -/* Return true if we know that there is no alias between DR_A and DR_B
> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
> -   *LOWER_BOUND_OUT to this N.  */
> +/* Return true if we know that there is no alias between DR_INFO_A and
> +   DR_INFO_B when abs (DR_STEP (DR_INFO_A->dr)) >= N for some N.
> +   When returning true, set *LOWER_BOUND_OUT to this N.  */
>
>  static bool
> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
> +vectorizable_with_step_bound_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b,
>                                 poly_uint64 *lower_bound_out)
>  {
>    /* Check that there is a constant gap of known sign between DR_A
>       and DR_B.  */
> +  data_reference *dr_a = dr_info_a->dr;
> +  data_reference *dr_b = dr_info_b->dr;
>    poly_int64 init_a, init_b;
>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
> @@ -3306,19 +3341,19 @@ vectorizable_with_step_bound_p (data_ref
>    if (maybe_lt (init_b, init_a))
>      {
>        std::swap (init_a, init_b);
> +      std::swap (dr_info_a, dr_info_b);
>        std::swap (dr_a, dr_b);
>      }
>
>    /* If the two accesses could be dependent within a scalar iteration,
>       make sure that we'd retain their order.  */
> -  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
> -                                        vect_dr_stmt (dr_b)))
> +  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_info_a), init_b)
> +      && !vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
>      return false;
>
>    /* There is no alias if abs (DR_STEP) is greater than or equal to
>       the bytes spanned by the combination of the two accesses.  */
> -  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_b) - init_a;
> +  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_info_b) - init_a;
>    return true;
>  }
>
> @@ -3376,7 +3411,6 @@ vect_prune_runtime_alias_test_list (loop
>      {
>        int comp_res;
>        poly_uint64 lower_bound;
> -      struct data_reference *dr_a, *dr_b;
>        tree segment_length_a, segment_length_b;
>        unsigned HOST_WIDE_INT access_size_a, access_size_b;
>        unsigned int align_a, align_b;
> @@ -3404,25 +3438,26 @@ vect_prune_runtime_alias_test_list (loop
>           continue;
>         }
>
> -      dr_a = DDR_A (ddr);
> -      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
> +      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
> +      stmt_vec_info stmt_info_a = dr_info_a->stmt;
>
> -      dr_b = DDR_B (ddr);
> -      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
> +      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
> +      stmt_vec_info stmt_info_b = dr_info_b->stmt;
>
>        /* Skip the pair if inter-iteration dependencies are irrelevant
>          and intra-iteration dependencies are guaranteed to be honored.  */
>        if (ignore_step_p
> -         && (vect_pr

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [37/46] Associate alignment information with stmt_vec_infos
  2018-07-26 10:55     ` Richard Sandiford
@ 2018-07-26 11:13       ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-26 11:13 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Thu, Jul 26, 2018 at 12:55 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Biener <richard.guenther@gmail.com> writes:
> > On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Alignment information is really a property of a stmt_vec_info
> >> (and the way we want to vectorise it) rather than the original scalar dr.
> >> I think that was true even before the recent dr sharing.
> >
> > But that only holds as long as we handle only stmts with a single DR.
> > In reality alignment info _is_ a property of the DR and not of the stmt.
> >
> > So you're taking a shortcut here; shouldn't we rename
> > dr_misalignment to stmt_dr_misalignment then?
> >
> > Otherwise I don't see how this makes sense semantically.
>
> OK, the patch below takes a different approach, suggested in the
> 38/46 thread.  The idea is to make dr_aux link back to both the scalar
> data_reference and the containing stmt_vec_info, so that it becomes a
> lookup-free key for a vectorisable reference.
>
> The data_reference link is just STMT_VINFO_DATA_REF, moved from
> _stmt_vec_info.  The stmt pointer is a new field and always tracks
> the current stmt_vec_info for the reference (which might be a pattern
> stmt or the original stmt).
>
> Then 38/46 can use dr_aux instead of data_reference (compared to current
> sources) and instead of stmt_vec_info (compared to the original series).
> This still avoids the repeated lookups that the series is trying to avoid.
>
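> A minimal sketch of the invariant this sets up (names as in the patch
> below; the surrounding code is hypothetical):
>
>   stmt_vec_info stmt_info = vect_dr_stmt (dr);
>   dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
>   /* Both links are now available without any further lookups.  */
>   gcc_checking_assert (dr_info->dr == dr);
>   gcc_checking_assert (dr_info->stmt == stmt_info);
>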
> The patch also makes the dr_aux in the current (possibly pattern) stmt
> be the one that counts, rather than have the information stay with the
> original DR_STMT.  A new macro (STMT_VINFO_DR_INFO) gives this
> information for a given stmt_vec_info.
>
> The changes together should make it easier to have multiple dr_auxs
> in a single statement.

I like this.

OK.
Richard.

> Thanks,
> Richard
>
>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::move_dr): New member function.
>         (dataref_aux): Rename to...
>         (dr_vec_info): ...this and add "dr" and "stmt" fields.
>         (_stmt_vec_info::dr_aux): Update accordingly.
>         (_stmt_vec_info::data_ref_info): Delete.
>         (STMT_VINFO_GROUPED_ACCESS, DR_GROUP_FIRST_ELEMENT)
>         (DR_GROUP_NEXT_ELEMENT, DR_GROUP_SIZE, DR_GROUP_STORE_COUNT)
>         (DR_GROUP_GAP, DR_GROUP_SAME_DR_STMT, REDUC_GROUP_FIRST_ELEMENT)
>         (REDUC_GROUP_NEXT_ELEMENT, REDUC_GROUP_SIZE): Use dr_aux.dr instead
>         of data_ref.
>         (STMT_VINFO_DATA_REF): Likewise.  Turn into an lvalue.
>         (STMT_VINFO_DR_INFO): New macro.
>         (DR_VECT_AUX): Use STMT_VINFO_DR_INFO and vect_dr_stmt.
>         (set_dr_misalignment): Update after rename of dataref_aux.
>         (vect_dr_stmt): Move earlier in file.  Return dr_aux.stmt.
>         * tree-vect-stmts.c (new_stmt_vec_info): Remove redundant
>         initialization of STMT_VINFO_DATA_REF.
>         * tree-vectorizer.c (vec_info::move_dr): New function.
>         * tree-vect-patterns.c (vect_recog_bool_pattern)
>         (vect_recog_mask_conversion_pattern)
>         (vect_recog_gather_scatter_pattern): Use it.
>         * tree-vect-data-refs.c (vect_analyze_data_refs): Initialize
>         the "dr" and "stmt" fields of dr_vec_info instead of
>         STMT_VINFO_DATA_REF.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:30:55.000000000 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:30:56.197256524 +0100
> @@ -240,6 +240,7 @@ struct vec_info {
>    stmt_vec_info lookup_stmt (gimple *);
>    stmt_vec_info lookup_def (tree);
>    stmt_vec_info lookup_single_use (tree);
> +  void move_dr (stmt_vec_info, stmt_vec_info);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> @@ -767,7 +768,11 @@ enum vect_memory_access_type {
>    VMAT_GATHER_SCATTER
>  };
>
> -struct dataref_aux {
> +struct dr_vec_info {
> +  /* The data reference itself.  */
> +  data_reference *dr;
> +  /* The statement that contains the data reference.  */
> +  stmt_vec_info stmt;
>    /* The misalignment in bytes of the reference, or -1 if not known.  */
>    int misalignment;
>    /* The byte alignment that we'd ideally like the reference to have,
> @@ -818,11 +823,7 @@ struct _stmt_vec_info {
>       data-ref (array/pointer/struct access). A GIMPLE stmt is expected to have
>       at most one such data-ref.  */
>
> -  /* Information about the data-ref (access function, etc),
> -     relative to the inner-most containing loop.  */
> -  struct data_reference *data_ref_info;
> -
> -  dataref_aux dr_aux;
> +  dr_vec_info dr_aux;
>
>    /* Information about the data-ref relative to this loop
>       nest (the loop that is being considered for vectorization).  */
> @@ -996,7 +997,7 @@ #define STMT_VINFO_LIVE_P(S)
>  #define STMT_VINFO_VECTYPE(S)              (S)->vectype
>  #define STMT_VINFO_VEC_STMT(S)             (S)->vectorized_stmt
>  #define STMT_VINFO_VECTORIZABLE(S)         (S)->vectorizable
> -#define STMT_VINFO_DATA_REF(S)             (S)->data_ref_info
> +#define STMT_VINFO_DATA_REF(S)             ((S)->dr_aux.dr + 0)
>  #define STMT_VINFO_GATHER_SCATTER_P(S)    (S)->gather_scatter_p
>  #define STMT_VINFO_STRIDED_P(S)                   (S)->strided_p
>  #define STMT_VINFO_MEMORY_ACCESS_TYPE(S)   (S)->memory_access_type
> @@ -1017,13 +1018,17 @@ #define STMT_VINFO_DR_OFFSET_ALIGNMENT(S
>  #define STMT_VINFO_DR_STEP_ALIGNMENT(S) \
>    (S)->dr_wrt_vec_loop.step_alignment
>
> +#define STMT_VINFO_DR_INFO(S) \
> +  (gcc_checking_assert ((S)->dr_aux.stmt == (S)), &(S)->dr_aux)
> +
>  #define STMT_VINFO_IN_PATTERN_P(S)         (S)->in_pattern_p
>  #define STMT_VINFO_RELATED_STMT(S)         (S)->related_stmt
>  #define STMT_VINFO_PATTERN_DEF_SEQ(S)      (S)->pattern_def_seq
>  #define STMT_VINFO_SAME_ALIGN_REFS(S)      (S)->same_align_refs
>  #define STMT_VINFO_SIMD_CLONE_INFO(S)     (S)->simd_clone_info
>  #define STMT_VINFO_DEF_TYPE(S)             (S)->def_type
> -#define STMT_VINFO_GROUPED_ACCESS(S)      ((S)->data_ref_info && DR_GROUP_FIRST_ELEMENT(S))
> +#define STMT_VINFO_GROUPED_ACCESS(S) \
> +  ((S)->dr_aux.dr && DR_GROUP_FIRST_ELEMENT(S))
>  #define STMT_VINFO_LOOP_PHI_EVOLUTION_BASE_UNCHANGED(S) (S)->loop_phi_evolution_base_unchanged
>  #define STMT_VINFO_LOOP_PHI_EVOLUTION_PART(S) (S)->loop_phi_evolution_part
>  #define STMT_VINFO_MIN_NEG_DIST(S)     (S)->min_neg_dist
> @@ -1031,16 +1036,25 @@ #define STMT_VINFO_NUM_SLP_USES(S)      (S)->
>  #define STMT_VINFO_REDUC_TYPE(S)       (S)->reduc_type
>  #define STMT_VINFO_REDUC_DEF(S)                (S)->reduc_def
>
> -#define DR_GROUP_FIRST_ELEMENT(S)  (gcc_checking_assert ((S)->data_ref_info), (S)->first_element)
> -#define DR_GROUP_NEXT_ELEMENT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->next_element)
> -#define DR_GROUP_SIZE(S)           (gcc_checking_assert ((S)->data_ref_info), (S)->size)
> -#define DR_GROUP_STORE_COUNT(S)    (gcc_checking_assert ((S)->data_ref_info), (S)->store_count)
> -#define DR_GROUP_GAP(S)            (gcc_checking_assert ((S)->data_ref_info), (S)->gap)
> -#define DR_GROUP_SAME_DR_STMT(S)   (gcc_checking_assert ((S)->data_ref_info), (S)->same_dr_stmt)
> -
> -#define REDUC_GROUP_FIRST_ELEMENT(S)   (gcc_checking_assert (!(S)->data_ref_info), (S)->first_element)
> -#define REDUC_GROUP_NEXT_ELEMENT(S)    (gcc_checking_assert (!(S)->data_ref_info), (S)->next_element)
> -#define REDUC_GROUP_SIZE(S)            (gcc_checking_assert (!(S)->data_ref_info), (S)->size)
> +#define DR_GROUP_FIRST_ELEMENT(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->first_element)
> +#define DR_GROUP_NEXT_ELEMENT(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->next_element)
> +#define DR_GROUP_SIZE(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->size)
> +#define DR_GROUP_STORE_COUNT(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->store_count)
> +#define DR_GROUP_GAP(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->gap)
> +#define DR_GROUP_SAME_DR_STMT(S) \
> +  (gcc_checking_assert ((S)->dr_aux.dr), (S)->same_dr_stmt)
> +
> +#define REDUC_GROUP_FIRST_ELEMENT(S) \
> +  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->first_element)
> +#define REDUC_GROUP_NEXT_ELEMENT(S) \
> +  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->next_element)
> +#define REDUC_GROUP_SIZE(S) \
> +  (gcc_checking_assert (!(S)->dr_aux.dr), (S)->size)
>
>  #define STMT_VINFO_RELEVANT_P(S)          ((S)->relevant != vect_unused_in_scope)
>
> @@ -1048,7 +1062,7 @@ #define HYBRID_SLP_STMT(S)
>  #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
>  #define STMT_SLP_TYPE(S)                   (S)->slp_type
>
> -#define DR_VECT_AUX(dr) (&vinfo_for_stmt (DR_STMT (dr))->dr_aux)
> +#define DR_VECT_AUX(dr) (STMT_VINFO_DR_INFO (vect_dr_stmt (dr)))
>
>  #define VECT_MAX_COST 1000
>
> @@ -1259,6 +1273,20 @@ add_stmt_costs (void *data, stmt_vector_
>                    cost->misalign, cost->where);
>  }
>
> +/* Return the stmt DR is in.  For DR_STMT that have been replaced by
> +   a pattern this returns the corresponding pattern stmt.  Otherwise
> +   DR_STMT is returned.  */
> +
> +inline stmt_vec_info
> +vect_dr_stmt (data_reference *dr)
> +{
> +  gimple *stmt = DR_STMT (dr);
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
> +  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
> +  return stmt_info->dr_aux.stmt;
> +}
> +
>  /*-----------------------------------------------------------------*/
>  /* Info on data references alignment.                              */
>  /*-----------------------------------------------------------------*/
> @@ -1268,8 +1296,7 @@ #define DR_MISALIGNMENT_UNINITIALIZED (-
>  inline void
>  set_dr_misalignment (struct data_reference *dr, int val)
>  {
> -  dataref_aux *data_aux = DR_VECT_AUX (dr);
> -  data_aux->misalignment = val;
> +  DR_VECT_AUX (dr)->misalignment = val;
>  }
>
>  inline int
> @@ -1336,22 +1363,6 @@ vect_dr_behavior (data_reference *dr)
>      return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
>  }
>
> -/* Return the stmt DR is in.  For DR_STMT that have been replaced by
> -   a pattern this returns the corresponding pattern stmt.  Otherwise
> -   DR_STMT is returned.  */
> -
> -inline stmt_vec_info
> -vect_dr_stmt (data_reference *dr)
> -{
> -  gimple *stmt = DR_STMT (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
> -    return STMT_VINFO_RELATED_STMT (stmt_info);
> -  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
> -  gcc_checking_assert (!STMT_VINFO_RELATED_STMT (stmt_info));
> -  return stmt_info;
> -}
> -
>  /* Return true if the vect cost model is unlimited.  */
>  static inline bool
>  unlimited_cost_model (loop_p loop)
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-26 11:30:55.000000000 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-26 11:30:56.197256524 +0100
> @@ -9872,7 +9872,6 @@ new_stmt_vec_info (gimple *stmt, vec_inf
>    STMT_VINFO_VECTORIZABLE (res) = true;
>    STMT_VINFO_IN_PATTERN_P (res) = false;
>    STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
> -  STMT_VINFO_DATA_REF (res) = NULL;
>    STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
>    STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
>
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-26 11:30:55.000000000 +0100
> +++ gcc/tree-vectorizer.c       2018-07-26 11:30:56.197256524 +0100
> @@ -562,6 +562,21 @@ vec_info::lookup_single_use (tree lhs)
>    return NULL;
>  }
>
> +/* Record that NEW_STMT_INFO now implements the same data reference
> +   as OLD_STMT_INFO.  */
> +
> +void
> +vec_info::move_dr (stmt_vec_info new_stmt_info, stmt_vec_info old_stmt_info)
> +{
> +  gcc_assert (!is_pattern_stmt_p (old_stmt_info));
> +  STMT_VINFO_DR_INFO (old_stmt_info)->stmt = new_stmt_info;
> +  new_stmt_info->dr_aux = old_stmt_info->dr_aux;
> +  STMT_VINFO_DR_WRT_VEC_LOOP (new_stmt_info)
> +    = STMT_VINFO_DR_WRT_VEC_LOOP (old_stmt_info);
> +  STMT_VINFO_GATHER_SCATTER_P (new_stmt_info)
> +    = STMT_VINFO_GATHER_SCATTER_P (old_stmt_info);
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-26 11:30:55.000000000 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-26 11:30:56.193256600 +0100
> @@ -3828,10 +3828,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>         }
>        pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs);
>        pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
> -      STMT_VINFO_DATA_REF (pattern_stmt_info)
> -       = STMT_VINFO_DATA_REF (stmt_vinfo);
> -      STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
> -       = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_vinfo);
> +      vinfo->move_dr (pattern_stmt_info, stmt_vinfo);
>        *type_out = vectype;
>        vect_pattern_detected ("vect_recog_bool_pattern", last_stmt);
>
> @@ -3954,14 +3951,7 @@ vect_recog_mask_conversion_pattern (stmt
>
>        pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
>        if (STMT_VINFO_DATA_REF (stmt_vinfo))
> -       {
> -         STMT_VINFO_DATA_REF (pattern_stmt_info)
> -           = STMT_VINFO_DATA_REF (stmt_vinfo);
> -         STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
> -           = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_vinfo);
> -         STMT_VINFO_GATHER_SCATTER_P (pattern_stmt_info)
> -           = STMT_VINFO_GATHER_SCATTER_P (stmt_vinfo);
> -       }
> +       vinfo->move_dr (pattern_stmt_info, stmt_vinfo);
>
>        *type_out = vectype1;
>        vect_pattern_detected ("vect_recog_mask_conversion_pattern", last_stmt);
> @@ -4283,11 +4273,7 @@ vect_recog_gather_scatter_pattern (stmt_
>    /* Copy across relevant vectorization info and associate DR with the
>       new pattern statement instead of the original statement.  */
>    stmt_vec_info pattern_stmt_info = loop_vinfo->add_stmt (pattern_stmt);
> -  STMT_VINFO_DATA_REF (pattern_stmt_info) = dr;
> -  STMT_VINFO_DR_WRT_VEC_LOOP (pattern_stmt_info)
> -    = STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
> -  STMT_VINFO_GATHER_SCATTER_P (pattern_stmt_info)
> -    = STMT_VINFO_GATHER_SCATTER_P (stmt_info);
> +  loop_vinfo->move_dr (pattern_stmt_info, stmt_info);
>
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    *type_out = vectype;
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-26 11:30:55.000000000 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-26 11:30:56.193256600 +0100
> @@ -4120,7 +4120,10 @@ vect_analyze_data_refs (vec_info *vinfo,
>        poly_uint64 vf;
>
>        gcc_assert (DR_REF (dr));
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      stmt_vec_info stmt_info = vinfo->lookup_stmt (DR_STMT (dr));
> +      gcc_assert (!stmt_info->dr_aux.dr);
> +      stmt_info->dr_aux.dr = dr;
> +      stmt_info->dr_aux.stmt = stmt_info;
>
>        /* Check that analysis of the data-ref succeeded.  */
>        if (!DR_BASE_ADDRESS (dr) || !DR_OFFSET (dr) || !DR_INIT (dr)
> @@ -4292,9 +4295,6 @@ vect_analyze_data_refs (vec_info *vinfo,
>             }
>         }
>
> -      gcc_assert (!STMT_VINFO_DATA_REF (stmt_info));
> -      STMT_VINFO_DATA_REF (stmt_info) = dr;
> -
>        /* Set vectype for STMT.  */
>        scalar_type = TREE_TYPE (DR_REF (dr));
>        STMT_VINFO_VECTYPE (stmt_info)

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [39/46 v2] Change STMT_VINFO_UNALIGNED_DR to a dr_vec_info
  2018-07-26 11:08   ` [39/46 v2] Change STMT_VINFO_UNALIGNED_DR to a dr_vec_info Richard Sandiford
@ 2018-07-26 11:13     ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-26 11:13 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Thu, Jul 26, 2018 at 1:08 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> [Updated after new 37/46 and 38/46]
>
> After previous changes, it makes more sense for STMT_VINFO_UNALIGNED_DR
> to be dr_vec_info rather than a data_reference.
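>
> The effect at the use sites (both forms are taken verbatim from the
> hunks below):
>
>   /* Before:  */
>   dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
>   /* After:  */
>   dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);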

OK.

>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_loop_vec_info::unaligned_dr): Change to
>         dr_vec_info.
>         * tree-vect-data-refs.c (vect_enhance_data_refs_alignment): Update
>         accordingly.
>         * tree-vect-loop.c (vect_analyze_loop_2): Likewise.
>         * tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
>         (vect_gen_prolog_loop_niters): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:42:19.035663718 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:42:24.919598492 +0100
> @@ -437,7 +437,7 @@ typedef struct _loop_vec_info : public v
>    tree mask_compare_type;
>
>    /* Unknown DRs according to which loop was peeled.  */
> -  struct data_reference *unaligned_dr;
> +  struct dr_vec_info *unaligned_dr;
>
>    /* peeling_for_alignment indicates whether peeling for alignment will take
>       place, and what the peeling factor should be:
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-26 11:42:19.031663762 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-26 11:42:24.915598537 +0100
> @@ -2135,7 +2135,7 @@ vect_enhance_data_refs_alignment (loop_v
>                 vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
>               }
>
> -          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
> +          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info;
>            if (npeel)
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
>            else
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-26 11:42:19.031663762 +0100
> +++ gcc/tree-vect-loop.c        2018-07-26 11:42:24.919598492 +0100
> @@ -2142,8 +2142,7 @@ vect_analyze_loop_2 (loop_vec_info loop_
>           /* Niters for peeled prolog loop.  */
>           if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
>             {
> -             dr_vec_info *dr_info
> -               = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
> +             dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
>               tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
>               niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
>             }
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-26 11:42:19.031663762 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-26 11:42:24.915598537 +0100
> @@ -1560,7 +1560,7 @@ vect_update_ivs_after_vectorizer (loop_v
>  static tree
>  get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
>  {
> -  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
> +  dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
>    stmt_vec_info stmt_info = dr_info->stmt;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
> @@ -1627,7 +1627,7 @@ get_misalign_in_elems (gimple **seq, loo
>  vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
>                              basic_block bb, int *bound)
>  {
> -  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
> +  dr_vec_info *dr_info = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
>    tree var;
>    tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
>    gimple_seq stmts = NULL, new_stmts = NULL;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [36/46] Add a pattern_stmt_p field to stmt_vec_info
  2018-07-26 10:29         ` Richard Sandiford
@ 2018-07-26 11:15           ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-26 11:15 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Thu, Jul 26, 2018 at 12:29 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Biener <richard.guenther@gmail.com> writes:
> > On Wed, Jul 25, 2018 at 1:09 PM Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Richard Biener <richard.guenther@gmail.com> writes:
> >> > On Tue, Jul 24, 2018 at 12:07 PM Richard Sandiford
> >> > <richard.sandiford@arm.com> wrote:
> >> >>
> >> >> This patch adds a pattern_stmt_p field to stmt_vec_info, so that it's
> >> >> possible to tell whether the statement is a pattern statement without
> >> >> referring to other statements.  The new field goes in what was
> >> >> previously a hole in the structure, so the size is the same as before.
> >> >
> >> > Not sure what the advantage is?  is_pattern_stmt_p () looks nicer
> >> > than ->is_pattern_p
> >>
> >> I can keep the function wrapper if you prefer that.  But having a
> >> statement "know" whether it's a pattern stmt makes things like
> >> freeing stmt_vec_infos simpler (see later patches in the series).
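> >>
> >> As a minimal sketch of the kind of test this allows (hypothetical
> >> code; compare the !pattern_stmt_p assertion in remove_stmt, 41/46):
> >>
> >>   /* No need to chase RELATED_STMT to classify STMT_INFO.  */
> >>   if (stmt_info->pattern_stmt_p)
> >>     ... /* pattern stmts are not part of any basic block */ ...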
> >
> > Ah, ok.
> >
> >> It should also be cheaper to test, but that's much more minor.
> >
> > So please keep the wrapper.
>
> Like this?

Yes, OK.

Thanks,
Richard.

> > I guess at some point we should decide what to do with all
> > the STMT_VINFO_ macros (and the others, {LOOP,BB}_ stuff
> > is already used inconsistently).
>
> Yeah...
>
>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::pattern_stmt_p): New field.
>         (is_pattern_stmt_p): Use it.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Set pattern_stmt_p
>         on pattern statements.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:28:18.000000000 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:28:19.072951054 +0100
> @@ -791,6 +791,12 @@ struct _stmt_vec_info {
>    /* Stmt is part of some pattern (computation idiom)  */
>    bool in_pattern_p;
>
> +  /* True if the statement was created during pattern recognition as
> +     part of the replacement for RELATED_STMT.  This implies that the
> +     statement isn't part of any basic block, although for convenience
> +     its gimple_bb is the same as for RELATED_STMT.  */
> +  bool pattern_stmt_p;
> +
>    /* Is this statement vectorizable or should it be skipped in (partial)
>       vectorization.  */
>    bool vectorizable;
> @@ -1157,8 +1163,7 @@ get_later_stmt (stmt_vec_info stmt1_info
>  static inline bool
>  is_pattern_stmt_p (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info related_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> -  return related_stmt_info && STMT_VINFO_IN_PATTERN_P (related_stmt_info);
> +  return stmt_info->pattern_stmt_p;
>  }
>
>  /* Return true if BB is a loop header.  */
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-26 11:28:18.000000000 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-26 11:28:19.068951168 +0100
> @@ -108,6 +108,7 @@ vect_init_pattern_stmt (gimple *pattern_
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
> +  pattern_stmt_info->pattern_stmt_p = true;
>    STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
>    STMT_VINFO_DEF_TYPE (pattern_stmt_info)
>      = STMT_VINFO_DEF_TYPE (orig_stmt_info);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [40/46 v2] Add vec_info::lookup_dr
  2018-07-26 11:10   ` [40/46 v2] " Richard Sandiford
@ 2018-07-26 11:16     ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-26 11:16 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Thu, Jul 26, 2018 at 1:10 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> [Updated after new 37/46 and 38/46.  41 onwards are unaffected.]
>
> This patch replaces DR_VECT_AUX and vect_dr_stmt with a new
> vec_info::lookup_dr function, so that the lookup is relative
> to a particular vec_info rather than to global state.
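>
> A minimal before/after sketch of the change in call style (both forms
> appear in the hunks below; the declaration context is hypothetical):
>
>   /* Before: lookup through global state via DR_STMT/vinfo_for_stmt.  */
>   dr_vec_info *dr_info = DR_VECT_AUX (dr);
>   /* After: lookup relative to the owning vec_info.  */
>   dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);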

OK.

>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::lookup_dr): New member function.
>         (vect_dr_stmt): Delete.
>         * tree-vectorizer.c (vec_info::lookup_dr): New function.
>         * tree-vect-loop-manip.c (vect_update_inits_of_drs): Use it instead
>         of DR_VECT_AUX.
>         * tree-vect-data-refs.c (vect_analyze_possibly_independent_ddr)
>         (vect_analyze_data_ref_dependence, vect_record_base_alignments)
>         (vect_verify_datarefs_alignment, vect_peeling_supportable)
>         (vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
>         (vect_analyze_data_refs): Likewise.
>         (vect_slp_analyze_data_ref_dependence): Likewise.  Take a vec_info
>         argument.
>         (vect_find_same_alignment_drs): Likewise.
>         (vect_slp_analyze_node_dependences): Update calls accordingly.
>         (vect_analyze_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
>         instead of DR_VECT_AUX.
>         (vect_get_peeling_costs_all_drs): Take a loop_vec_info instead
>         of a vector of data references.  Use vec_info::lookup_dr instead of
>         DR_VECT_AUX.
>         (vect_peeling_hash_get_lowest_cost): Update calls accordingly.
>         (vect_enhance_data_refs_alignment): Likewise.  Use vec_info::lookup_dr
>         instead of DR_VECT_AUX.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:42:24.919598492 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:42:29.387548800 +0100
> @@ -240,6 +240,7 @@ struct vec_info {
>    stmt_vec_info lookup_stmt (gimple *);
>    stmt_vec_info lookup_def (tree);
>    stmt_vec_info lookup_single_use (tree);
> +  struct dr_vec_info *lookup_dr (data_reference *);
>    void move_dr (stmt_vec_info, stmt_vec_info);
>
>    /* The type of vectorization.  */
> @@ -1062,8 +1063,6 @@ #define HYBRID_SLP_STMT(S)
>  #define PURE_SLP_STMT(S)                  ((S)->slp_type == pure_slp)
>  #define STMT_SLP_TYPE(S)                   (S)->slp_type
>
> -#define DR_VECT_AUX(dr) (STMT_VINFO_DR_INFO (vect_dr_stmt (dr)))
> -
>  #define VECT_MAX_COST 1000
>
>  /* The maximum number of intermediate steps required in multi-step type
> @@ -1273,20 +1272,6 @@ add_stmt_costs (void *data, stmt_vector_
>                    cost->misalign, cost->where);
>  }
>
> -/* Return the stmt DR is in.  For DR_STMT that have been replaced by
> -   a pattern this returns the corresponding pattern stmt.  Otherwise
> -   DR_STMT is returned.  */
> -
> -inline stmt_vec_info
> -vect_dr_stmt (data_reference *dr)
> -{
> -  gimple *stmt = DR_STMT (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
> -  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
> -  return stmt_info->dr_aux.stmt;
> -}
> -
>  /*-----------------------------------------------------------------*/
>  /* Info on data references alignment.                              */
>  /*-----------------------------------------------------------------*/
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-26 11:30:56.197256524 +0100
> +++ gcc/tree-vectorizer.c       2018-07-26 11:42:29.387548800 +0100
> @@ -562,6 +562,17 @@ vec_info::lookup_single_use (tree lhs)
>    return NULL;
>  }
>
> +/* Return vectorization information about DR.  */
> +
> +dr_vec_info *
> +vec_info::lookup_dr (data_reference *dr)
> +{
> +  stmt_vec_info stmt_info = lookup_stmt (DR_STMT (dr));
> +  /* DR_STMT should never refer to a stmt in a pattern replacement.  */
> +  gcc_checking_assert (!is_pattern_stmt_p (stmt_info));
> +  return STMT_VINFO_DR_INFO (stmt_info->dr_aux.stmt);
> +}
> +
>  /* Record that NEW_STMT_INFO now implements the same data reference
>     as OLD_STMT_INFO.  */
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-26 11:42:24.915598537 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-26 11:42:29.387548800 +0100
> @@ -1754,8 +1754,8 @@ vect_update_inits_of_drs (loop_vec_info
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      gimple *stmt = DR_STMT (dr);
> -      if (!STMT_VINFO_GATHER_SCATTER_P (vinfo_for_stmt (stmt)))
> +      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
> +      if (!STMT_VINFO_GATHER_SCATTER_P (dr_info->stmt))
>         vect_update_init_of_dr (dr, niters, code);
>      }
>  }
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-26 11:42:24.915598537 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-26 11:42:29.387548800 +0100
> @@ -269,10 +269,10 @@ vect_analyze_possibly_independent_ddr (d
>
>              Note that the alias checks will be removed if the VF ends up
>              being small enough.  */
> -         return (!STMT_VINFO_GATHER_SCATTER_P
> -                    (vinfo_for_stmt (DR_STMT (DDR_A (ddr))))
> -                 && !STMT_VINFO_GATHER_SCATTER_P
> -                       (vinfo_for_stmt (DR_STMT (DDR_B (ddr))))
> +         dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
> +         dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
> +         return (!STMT_VINFO_GATHER_SCATTER_P (dr_info_a->stmt)
> +                 && !STMT_VINFO_GATHER_SCATTER_P (dr_info_b->stmt)
>                   && vect_mark_for_runtime_alias_test (ddr, loop_vinfo));
>         }
>      }
> @@ -296,8 +296,8 @@ vect_analyze_data_ref_dependence (struct
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> -  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (dra);
> +  dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (drb);
>    stmt_vec_info stmtinfo_a = dr_info_a->stmt;
>    stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>    lambda_vector dist_v;
> @@ -604,17 +604,18 @@ vect_analyze_data_ref_dependences (loop_
>  /* Function vect_slp_analyze_data_ref_dependence.
>
>     Return TRUE if there (might) exist a dependence between a memory-reference
> -   DRA and a memory-reference DRB.  When versioning for alias may check a
> -   dependence at run-time, return FALSE.  Adjust *MAX_VF according to
> -   the data dependence.  */
> +   DRA and a memory-reference DRB for VINFO.  When versioning for alias
> +   may check a dependence at run-time, return FALSE.  Adjust *MAX_VF
> +   according to the data dependence.  */
>
>  static bool
> -vect_slp_analyze_data_ref_dependence (struct data_dependence_relation *ddr)
> +vect_slp_analyze_data_ref_dependence (vec_info *vinfo,
> +                                     struct data_dependence_relation *ddr)
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> -  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
> +  dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
>
>    /* We need to check dependences of statements marked as unvectorizable
>       as well, they still can prohibit vectorization.  */
> @@ -726,7 +727,8 @@ vect_slp_analyze_node_dependences (slp_i
>                   data_reference *store_dr = STMT_VINFO_DATA_REF (store_info);
>                   ddr_p ddr = initialize_data_dependence_relation
>                                 (dr_a, store_dr, vNULL);
> -                 dependent = vect_slp_analyze_data_ref_dependence (ddr);
> +                 dependent
> +                   = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
>                   free_dependence_relation (ddr);
>                   if (dependent)
>                     break;
> @@ -736,7 +738,7 @@ vect_slp_analyze_node_dependences (slp_i
>             {
>               ddr_p ddr = initialize_data_dependence_relation (dr_a,
>                                                                dr_b, vNULL);
> -             dependent = vect_slp_analyze_data_ref_dependence (ddr);
> +             dependent = vect_slp_analyze_data_ref_dependence (vinfo, ddr);
>               free_dependence_relation (ddr);
>             }
>           if (dependent)
> @@ -848,7 +850,7 @@ vect_record_base_alignments (vec_info *v
>    unsigned int i;
>    FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
>        stmt_vec_info stmt_info = dr_info->stmt;
>        if (!DR_IS_CONDITIONAL_IN_STMT (dr)
>           && STMT_VINFO_VECTORIZABLE (stmt_info)
> @@ -1172,7 +1174,7 @@ vect_verify_datarefs_alignment (loop_vec
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
>        stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
> @@ -1397,12 +1399,12 @@ vect_peeling_hash_get_most_frequent (_ve
>    return 1;
>  }
>
> -/* Get the costs of peeling NPEEL iterations checking data access costs
> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
> -   misalignment will be zero after peeling.  */
> +/* Get the costs of peeling NPEEL iterations for LOOP_VINFO, checking
> +   data access costs for all data refs.  If UNKNOWN_MISALIGNMENT is true,
> +   we assume DR0_INFO's misalignment will be zero after peeling.  */
>
>  static void
> -vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
> +vect_get_peeling_costs_all_drs (loop_vec_info loop_vinfo,
>                                 dr_vec_info *dr0_info,
>                                 unsigned int *inside_cost,
>                                 unsigned int *outside_cost,
> @@ -1411,12 +1413,13 @@ vect_get_peeling_costs_all_drs (vec<data
>                                 unsigned int npeel,
>                                 bool unknown_misalignment)
>  {
> +  vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>    unsigned i;
>    data_reference *dr;
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
>        stmt_vec_info stmt_info = dr_info->stmt;
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1466,10 +1469,9 @@ vect_peeling_hash_get_lowest_cost (_vect
>    body_cost_vec.create (2);
>    epilogue_cost_vec.create (2);
>
> -  vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
> -                                 elem->dr_info, &inside_cost, &outside_cost,
> -                                 &body_cost_vec, &prologue_cost_vec,
> -                                 elem->npeel, false);
> +  vect_get_peeling_costs_all_drs (loop_vinfo, elem->dr_info, &inside_cost,
> +                                 &outside_cost, &body_cost_vec,
> +                                 &prologue_cost_vec, elem->npeel, false);
>
>    body_cost_vec.release ();
>
> @@ -1550,7 +1552,7 @@ vect_peeling_supportable (loop_vec_info
>        if (dr == dr0_info->dr)
>         continue;
>
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
>        stmt_vec_info stmt_info = dr_info->stmt;
>        /* For interleaving, only the alignment of the first access
>          matters.  */
> @@ -1732,7 +1734,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
>        stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
> @@ -1896,7 +1898,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
> +      vect_get_peeling_costs_all_drs (loop_vinfo, dr0_info,
>                                       &load_inside_cost,
>                                       &load_outside_cost,
>                                       &dummy, &dummy, estimated_npeels, true);
> @@ -1905,7 +1907,7 @@ vect_enhance_data_refs_alignment (loop_v
>        if (first_store)
>         {
>           dummy.create (2);
> -         vect_get_peeling_costs_all_drs (datarefs, first_store,
> +         vect_get_peeling_costs_all_drs (loop_vinfo, first_store,
>                                           &store_inside_cost,
>                                           &store_outside_cost,
>                                           &dummy, &dummy,
> @@ -1996,7 +1998,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, NULL, &nopeel_inside_cost,
> +      vect_get_peeling_costs_all_drs (loop_vinfo, NULL, &nopeel_inside_cost,
>                                       &nopeel_outside_cost, &dummy, &dummy,
>                                       0, false);
>        dummy.release ();
> @@ -2126,7 +2128,7 @@ vect_enhance_data_refs_alignment (loop_v
>               {
>                 /* Strided accesses perform only component accesses, alignment
>                    is irrelevant for them.  */
> -               dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +               dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
>                 stmt_info = dr_info->stmt;
>                 if (STMT_VINFO_STRIDED_P (stmt_info)
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -2176,7 +2178,7 @@ vect_enhance_data_refs_alignment (loop_v
>      {
>        FOR_EACH_VEC_ELT (datarefs, i, dr)
>          {
> -         dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +         dr_vec_info *dr_info = loop_vinfo->lookup_dr (dr);
>           stmt_vec_info stmt_info = dr_info->stmt;
>
>           /* For interleaving, only the alignment of the first access
> @@ -2291,16 +2293,16 @@ vect_enhance_data_refs_alignment (loop_v
>
>  /* Function vect_find_same_alignment_drs.
>
> -   Update group and alignment relations according to the chosen
> +   Update group and alignment relations in VINFO according to the chosen
>     vectorization factor.  */
>
>  static void
> -vect_find_same_alignment_drs (struct data_dependence_relation *ddr)
> +vect_find_same_alignment_drs (vec_info *vinfo, data_dependence_relation *ddr)
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> -  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
> +  dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
>    stmt_vec_info stmtinfo_a = dr_info_a->stmt;
>    stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>
> @@ -2367,7 +2369,7 @@ vect_analyze_data_refs_alignment (loop_v
>    unsigned int i;
>
>    FOR_EACH_VEC_ELT (ddrs, i, ddr)
> -    vect_find_same_alignment_drs (ddr);
> +    vect_find_same_alignment_drs (vinfo, ddr);
>
>    vec<data_reference_p> datarefs = vinfo->shared->datarefs;
>    struct data_reference *dr;
> @@ -2375,7 +2377,7 @@ vect_analyze_data_refs_alignment (loop_v
>    vect_record_base_alignments (vinfo);
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
>        if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
>         vect_compute_data_ref_alignment (dr_info);
>      }
> @@ -2941,7 +2943,7 @@ vect_analyze_data_ref_accesses (vec_info
>    for (i = 0; i < datarefs_copy.length () - 1;)
>      {
>        data_reference_p dra = datarefs_copy[i];
> -      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +      dr_vec_info *dr_info_a = vinfo->lookup_dr (dra);
>        stmt_vec_info stmtinfo_a = dr_info_a->stmt;
>        stmt_vec_info lastinfo = NULL;
>        if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
> @@ -2953,7 +2955,7 @@ vect_analyze_data_ref_accesses (vec_info
>        for (i = i + 1; i < datarefs_copy.length (); ++i)
>         {
>           data_reference_p drb = datarefs_copy[i];
> -         dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +         dr_vec_info *dr_info_b = vinfo->lookup_dr (drb);
>           stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>           if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
>               || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
> @@ -3078,7 +3080,7 @@ vect_analyze_data_ref_accesses (vec_info
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
>      {
> -      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      dr_vec_info *dr_info = vinfo->lookup_dr (dr);
>        if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
>           && !vect_analyze_data_ref_access (dr_info))
>         {
> @@ -3438,10 +3440,10 @@ vect_prune_runtime_alias_test_list (loop
>           continue;
>         }
>
> -      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
> +      dr_vec_info *dr_info_a = loop_vinfo->lookup_dr (DDR_A (ddr));
>        stmt_vec_info stmt_info_a = dr_info_a->stmt;
>
> -      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
> +      dr_vec_info *dr_info_b = loop_vinfo->lookup_dr (DDR_B (ddr));
>        stmt_vec_info stmt_info_b = dr_info_b->stmt;
>
>        /* Skip the pair if inter-iteration dependencies are irrelevant

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [41/46] Add vec_info::remove_stmt
  2018-07-24 10:09 ` [41/46] Add vec_info::remove_stmt Richard Sandiford
@ 2018-07-31 12:02   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:02 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:09 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a new helper function for permanently removing a
> statement and its associated stmt_vec_info.
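
A minimal usage sketch of the new helper, mirroring the
vect_set_loop_condition hunk below (loop_vinfo and stmt stand in for
whatever the caller has; this is not part of the patch):

  /* remove_stmt bundles the unlink/remove/release/free dance that
     callers previously open-coded.  lookup_stmt returns NULL for
     statements outside the vectorizable region, in which case we
     fall back to a plain gsi_remove.  */
  if (stmt_vec_info stmt_info = loop_vinfo->lookup_stmt (stmt))
    loop_vinfo->remove_stmt (stmt_info);
  else
    {
      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
      gsi_remove (&gsi, true);
    }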

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::remove_stmt): Declare.
>         * tree-vectorizer.c (vec_info::remove_stmt): New function.
>         * tree-vect-loop-manip.c (vect_set_loop_condition): Use it.
>         * tree-vect-loop.c (vect_transform_loop): Likewise.
>         * tree-vect-slp.c (vect_schedule_slp): Likewise.
>         * tree-vect-stmts.c (vect_remove_stores): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:16.552366384 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:19.544339803 +0100
> @@ -241,6 +241,7 @@ struct vec_info {
>    stmt_vec_info lookup_def (tree);
>    stmt_vec_info lookup_single_use (tree);
>    stmt_vec_info lookup_dr (data_reference *);
> +  void remove_stmt (stmt_vec_info);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:16.552366384 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:19.544339803 +0100
> @@ -577,6 +577,20 @@ vec_info::lookup_dr (data_reference *dr)
>    return stmt_info;
>  }
>
> +/* Permanently remove the statement described by STMT_INFO from the
> +   function.  */
> +
> +void
> +vec_info::remove_stmt (stmt_vec_info stmt_info)
> +{
> +  gcc_assert (!stmt_info->pattern_stmt_p);
> +  gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
> +  unlink_stmt_vdef (stmt_info->stmt);
> +  gsi_remove (&si, true);
> +  release_defs (stmt_info->stmt);
> +  free_stmt_vec_info (stmt_info);
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c  2018-07-24 10:24:16.552366384 +0100
> +++ gcc/tree-vect-loop-manip.c  2018-07-24 10:24:19.540339838 +0100
> @@ -935,8 +935,12 @@ vect_set_loop_condition (struct loop *lo
>                                                   loop_cond_gsi);
>
>    /* Remove old loop exit test.  */
> -  gsi_remove (&loop_cond_gsi, true);
> -  free_stmt_vec_info (orig_cond);
> +  stmt_vec_info orig_cond_info;
> +  if (loop_vinfo
> +      && (orig_cond_info = loop_vinfo->lookup_stmt (orig_cond)))
> +    loop_vinfo->remove_stmt (orig_cond_info);
> +  else
> +    gsi_remove (&loop_cond_gsi, true);
>
>    if (dump_enabled_p ())
>      {
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:24:12.252404574 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:19.540339838 +0100
> @@ -8487,28 +8487,18 @@ vect_transform_loop (loop_vec_info loop_
>                   vect_transform_loop_stmt (loop_vinfo, stmt_info, &si,
>                                             &seen_store, &slp_scheduled);
>                 }
> +             gsi_next (&si);
>               if (seen_store)
>                 {
>                   if (STMT_VINFO_GROUPED_ACCESS (seen_store))
> -                   {
> -                     /* Interleaving.  If IS_STORE is TRUE, the
> -                        vectorization of the interleaving chain was
> -                        completed - free all the stores in the chain.  */
> -                     gsi_next (&si);
> -                     vect_remove_stores (DR_GROUP_FIRST_ELEMENT (seen_store));
> -                   }
> +                   /* Interleaving.  If IS_STORE is TRUE, the
> +                      vectorization of the interleaving chain was
> +                      completed - free all the stores in the chain.  */
> +                   vect_remove_stores (DR_GROUP_FIRST_ELEMENT (seen_store));
>                   else
> -                   {
> -                     /* Free the attached stmt_vec_info and remove the
> -                        stmt.  */
> -                     free_stmt_vec_info (stmt);
> -                     unlink_stmt_vdef (stmt);
> -                     gsi_remove (&si, true);
> -                     release_defs (stmt);
> -                   }
> +                   /* Free the attached stmt_vec_info and remove the stmt.  */
> +                   loop_vinfo->remove_stmt (stmt_info);
>                 }
> -             else
> -               gsi_next (&si);
>             }
>         }
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:24:02.360492422 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:19.540339838 +0100
> @@ -4087,7 +4087,6 @@ vect_schedule_slp (vec_info *vinfo)
>        slp_tree root = SLP_INSTANCE_TREE (instance);
>        stmt_vec_info store_info;
>        unsigned int j;
> -      gimple_stmt_iterator gsi;
>
>        /* Remove scalar call stmts.  Do not do this for basic-block
>          vectorization as not all uses may be vectorized.
> @@ -4108,11 +4107,7 @@ vect_schedule_slp (vec_info *vinfo)
>           if (store_info->pattern_stmt_p)
>             store_info = STMT_VINFO_RELATED_STMT (store_info);
>           /* Free the attached stmt_vec_info and remove the stmt.  */
> -         gsi = gsi_for_stmt (store_info);
> -         unlink_stmt_vdef (store_info);
> -         gsi_remove (&gsi, true);
> -         release_defs (store_info);
> -         free_stmt_vec_info (store_info);
> +         vinfo->remove_stmt (store_info);
>          }
>      }
>
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:08.924434128 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:19.544339803 +0100
> @@ -9842,8 +9842,8 @@ vect_transform_stmt (stmt_vec_info stmt_
>  void
>  vect_remove_stores (stmt_vec_info first_stmt_info)
>  {
> +  vec_info *vinfo = first_stmt_info->vinfo;
>    stmt_vec_info next_stmt_info = first_stmt_info;
> -  gimple_stmt_iterator next_si;
>
>    while (next_stmt_info)
>      {
> @@ -9851,11 +9851,7 @@ vect_remove_stores (stmt_vec_info first_
>        if (next_stmt_info->pattern_stmt_p)
>         next_stmt_info = STMT_VINFO_RELATED_STMT (next_stmt_info);
>        /* Free the attached stmt_vec_info and remove the stmt.  */
> -      next_si = gsi_for_stmt (next_stmt_info->stmt);
> -      unlink_stmt_vdef (next_stmt_info->stmt);
> -      gsi_remove (&next_si, true);
> -      release_defs (next_stmt_info->stmt);
> -      free_stmt_vec_info (next_stmt_info);
> +      vinfo->remove_stmt (next_stmt_info);
>        next_stmt_info = tmp;
>      }
>  }

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [43/46] Make free_stmt_vec_info take a stmt_vec_info
  2018-07-24 10:10 ` [43/46] Make free_stmt_vec_info take a stmt_vec_info Richard Sandiford
@ 2018-07-31 12:03   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:03 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:10 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes free_stmt_vec_info take the stmt_vec_info that
> it's supposed to free and makes it free only that stmt_vec_info.
> Callers need to update the statement mapping where necessary
> (but now there are only a couple of callers).
>
> This in turn means that we can leave ~vec_info to do the actual
> freeing, since there's no longer a need to do it before resetting
> the gimple_uids.
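
The caller-side pattern this implies, roughly (a sketch distilled from
the vec_info::remove_stmt hunk below, not a hunk from the patch):

  /* The caller now owns the stmt -> stmt_vec_info mapping: clear it
     first, then free just this one stmt_vec_info.  */
  set_vinfo_for_stmt (stmt_info->stmt, NULL);
  free_stmt_vec_info (stmt_info);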

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (free_stmt_vec_info): Take a stmt_vec_info
>         rather than a gimple stmt.
>         * tree-vect-stmts.c (free_stmt_vec_info): Likewise.  Don't free
>         information for pattern statements when passed the original
>         statement; instead wait to be passed the pattern statement itself.
>         Don't call set_vinfo_for_stmt here.
>         (free_stmt_vec_infos): Update call to free_stmt_vec_info.
>         * tree-vect-loop.c (_loop_vec_info::~loop_vec_info): Don't free
>         stmt_vec_infos here.
>         * tree-vect-slp.c (_bb_vec_info::~bb_vec_info): Likewise.
>         * tree-vectorizer.c (vec_info::remove_stmt): Nullify the statement's
>         stmt_vec_infos entry.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:22.684311906 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:26.084281700 +0100
> @@ -1484,7 +1484,7 @@ extern bool supportable_narrowing_operat
>                                              enum tree_code *,
>                                              int *, vec<tree> *);
>  extern stmt_vec_info new_stmt_vec_info (gimple *stmt, vec_info *);
> -extern void free_stmt_vec_info (gimple *stmt);
> +extern void free_stmt_vec_info (stmt_vec_info);
>  extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
>                                   enum vect_cost_for_stmt, stmt_vec_info,
>                                   int, enum vect_cost_model_location);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:22.684311906 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:26.084281700 +0100
> @@ -9916,7 +9916,7 @@ free_stmt_vec_infos (vec<stmt_vec_info>
>    stmt_vec_info info;
>    FOR_EACH_VEC_ELT (*v, i, info)
>      if (info != NULL_STMT_VEC_INFO)
> -      free_stmt_vec_info (STMT_VINFO_STMT (info));
> +      free_stmt_vec_info (info);
>    if (v == stmt_vec_info_vec)
>      stmt_vec_info_vec = NULL;
>    v->release ();
> @@ -9926,44 +9926,18 @@ free_stmt_vec_infos (vec<stmt_vec_info>
>  /* Free stmt vectorization related info.  */
>
>  void
> -free_stmt_vec_info (gimple *stmt)
> +free_stmt_vec_info (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -
> -  if (!stmt_info)
> -    return;
> -
> -  /* Check if this statement has a related "pattern stmt"
> -     (introduced by the vectorizer during the pattern recognition
> -     pass).  Free pattern's stmt_vec_info and def stmt's stmt_vec_info
> -     too.  */
> -  if (STMT_VINFO_IN_PATTERN_P (stmt_info))
> +  if (stmt_info->pattern_stmt_p)
>      {
> -      if (gimple_seq seq = STMT_VINFO_PATTERN_DEF_SEQ (stmt_info))
> -       for (gimple_stmt_iterator si = gsi_start (seq);
> -            !gsi_end_p (si); gsi_next (&si))
> -         {
> -           gimple *seq_stmt = gsi_stmt (si);
> -           gimple_set_bb (seq_stmt, NULL);
> -           tree lhs = gimple_get_lhs (seq_stmt);
> -           if (lhs && TREE_CODE (lhs) == SSA_NAME)
> -             release_ssa_name (lhs);
> -           free_stmt_vec_info (seq_stmt);
> -         }
> -      stmt_vec_info patt_stmt_info = STMT_VINFO_RELATED_STMT (stmt_info);
> -      if (patt_stmt_info)
> -       {
> -         gimple_set_bb (patt_stmt_info->stmt, NULL);
> -         tree lhs = gimple_get_lhs (patt_stmt_info->stmt);
> -         if (lhs && TREE_CODE (lhs) == SSA_NAME)
> -           release_ssa_name (lhs);
> -         free_stmt_vec_info (patt_stmt_info);
> -       }
> +      gimple_set_bb (stmt_info->stmt, NULL);
> +      tree lhs = gimple_get_lhs (stmt_info->stmt);
> +      if (lhs && TREE_CODE (lhs) == SSA_NAME)
> +       release_ssa_name (lhs);
>      }
>
>    STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
>    STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
> -  set_vinfo_for_stmt (stmt, NULL);
>    free (stmt_info);
>  }
>
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:24:19.540339838 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:26.080281735 +0100
> @@ -894,9 +894,6 @@ _loop_vec_info::~_loop_vec_info ()
>    for (j = 0; j < nbbs; j++)
>      {
>        basic_block bb = bbs[j];
> -      for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si))
> -        free_stmt_vec_info (gsi_stmt (si));
> -
>        for (si = gsi_start_bb (bb); !gsi_end_p (si); )
>          {
>           gimple *stmt = gsi_stmt (si);
> @@ -936,9 +933,6 @@ _loop_vec_info::~_loop_vec_info ()
>                     }
>                 }
>             }
> -
> -         /* Free stmt_vec_info.  */
> -         free_stmt_vec_info (stmt);
>            gsi_next (&si);
>          }
>      }
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:24:22.680311942 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:26.080281735 +0100
> @@ -2490,17 +2490,8 @@ _bb_vec_info::~_bb_vec_info ()
>  {
>    for (gimple_stmt_iterator si = region_begin;
>         gsi_stmt (si) != gsi_stmt (region_end); gsi_next (&si))
> -    {
> -      gimple *stmt = gsi_stmt (si);
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -
> -      if (stmt_info)
> -        /* Free stmt_vec_info.  */
> -        free_stmt_vec_info (stmt);
> -
> -      /* Reset region marker.  */
> -      gimple_set_uid (stmt, -1);
> -    }
> +    /* Reset region marker.  */
> +    gimple_set_uid (gsi_stmt (si), -1);
>
>    bb->aux = NULL;
>  }
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:22.684311906 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:26.084281700 +0100
> @@ -584,6 +584,7 @@ vec_info::lookup_dr (data_reference *dr)
>  vec_info::remove_stmt (stmt_vec_info stmt_info)
>  {
>    gcc_assert (!stmt_info->pattern_stmt_p);
> +  set_vinfo_for_stmt (stmt_info->stmt, NULL);
>    gimple_stmt_iterator si = gsi_for_stmt (stmt_info->stmt);
>    unlink_stmt_vdef (stmt_info->stmt);
>    gsi_remove (&si, true);

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [42/46] Add vec_info::replace_stmt
  2018-07-24 10:09 ` [42/46] Add vec_info::replace_stmt Richard Sandiford
@ 2018-07-31 12:03   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:03 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:09 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch adds a helper for replacing a stmt_vec_info's statement with
> a new statement.
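
A sketch of the intended use, based on the vect_remove_slp_scalar_calls
hunk below (lhs and stmt_info are assumed to be in scope):

  /* Replace a scalar call that became dead after vectorization by an
     assignment of zero to its lhs, keeping stmt_info pointing at the
     new statement.  */
  gassign *new_stmt
    = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
  gimple_stmt_iterator gsi = gsi_for_stmt (stmt_info->stmt);
  stmt_info->vinfo->replace_stmt (&gsi, stmt_info, new_stmt);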

OK.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::replace_stmt): Declare.
>         * tree-vectorizer.c (vec_info::replace_stmt): New function.
>         * tree-vect-slp.c (vect_remove_slp_scalar_calls): Use it.
>         * tree-vect-stmts.c (vectorizable_call): Likewise.
>         (vectorizable_simd_clone_call): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:19.544339803 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:22.684311906 +0100
> @@ -242,6 +242,7 @@ struct vec_info {
>    stmt_vec_info lookup_single_use (tree);
>    stmt_vec_info lookup_dr (data_reference *);
>    void remove_stmt (stmt_vec_info);
> +  void replace_stmt (gimple_stmt_iterator *, stmt_vec_info, gimple *);
>
>    /* The type of vectorization.  */
>    vec_kind kind;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:19.544339803 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:22.684311906 +0100
> @@ -591,6 +591,22 @@ vec_info::remove_stmt (stmt_vec_info stm
>    free_stmt_vec_info (stmt_info);
>  }
>
> +/* Replace the statement at GSI by NEW_STMT, both the vectorization
> +   information and the function itself.  STMT_INFO describes the statement
> +   at GSI.  */
> +
> +void
> +vec_info::replace_stmt (gimple_stmt_iterator *gsi, stmt_vec_info stmt_info,
> +                       gimple *new_stmt)
> +{
> +  gimple *old_stmt = stmt_info->stmt;
> +  gcc_assert (!stmt_info->pattern_stmt_p && old_stmt == gsi_stmt (*gsi));
> +  set_vinfo_for_stmt (old_stmt, NULL);
> +  set_vinfo_for_stmt (new_stmt, stmt_info);
> +  stmt_info->stmt = new_stmt;
> +  gsi_replace (gsi, new_stmt, true);
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:24:19.540339838 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:24:22.680311942 +0100
> @@ -4048,11 +4048,8 @@ vect_remove_slp_scalar_calls (slp_tree n
>         continue;
>        lhs = gimple_call_lhs (stmt);
>        new_stmt = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> -      set_vinfo_for_stmt (new_stmt, stmt_info);
> -      set_vinfo_for_stmt (stmt, NULL);
> -      STMT_VINFO_STMT (stmt_info) = new_stmt;
>        gsi = gsi_for_stmt (stmt);
> -      gsi_replace (&gsi, new_stmt, false);
> +      stmt_info->vinfo->replace_stmt (&gsi, stmt_info, new_stmt);
>        SSA_NAME_DEF_STMT (gimple_assign_lhs (new_stmt)) = new_stmt;
>      }
>  }
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:19.544339803 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:22.684311906 +0100
> @@ -3629,10 +3629,7 @@ vectorizable_call (stmt_vec_info stmt_in
>
>    gassign *new_stmt
>      = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> -  set_vinfo_for_stmt (new_stmt, stmt_info);
> -  set_vinfo_for_stmt (stmt_info->stmt, NULL);
> -  STMT_VINFO_STMT (stmt_info) = new_stmt;
> -  gsi_replace (gsi, new_stmt, false);
> +  vinfo->replace_stmt (gsi, stmt_info, new_stmt);
>
>    return true;
>  }
> @@ -4370,10 +4367,7 @@ vectorizable_simd_clone_call (stmt_vec_i
>      }
>    else
>      new_stmt = gimple_build_nop ();
> -  set_vinfo_for_stmt (new_stmt, stmt_info);
> -  set_vinfo_for_stmt (stmt, NULL);
> -  STMT_VINFO_STMT (stmt_info) = new_stmt;
> -  gsi_replace (gsi, new_stmt, true);
> +  vinfo->replace_stmt (gsi, stmt_info, new_stmt);
>    unlink_stmt_vdef (stmt);
>
>    return true;

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [44/46] Remove global vinfo_for_stmt-related routines
  2018-07-24 10:10 ` [44/46] Remove global vinfo_for_stmt-related routines Richard Sandiford
@ 2018-07-31 12:05   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:05 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:10 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> There are no more direct uses of:
>
> - new_stmt_vec_info
> - set_vinfo_for_stmt
> - free_stmt_vec_infos
> - free_stmt_vec_info
>
> outside of vec_info, so they can now be private member functions.
> It also seemed better to put them in tree-vectorizer.c, along with the
> other vec_info routines.
>
> We can also get rid of:
>
> - vinfo_for_stmt
> - stmt_vec_info_vec
> - set_stmt_vec_info_vec
>
> since nothing now uses them.  This was the main goal of the series.
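
For anyone reading the series out of order, the net effect on lookup
style is (a sketch, not a hunk from the patch):

  /* Before: one global stmt_vec_info_vec, keyed by gimple uid, with
     only one mapping live at a time.  */
  stmt_vec_info info = vinfo_for_stmt (stmt);

  /* After: each vec_info owns its own mapping, so several vec_infos
     can be live at once.  */
  stmt_vec_info info = vinfo->lookup_stmt (stmt);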

Great.

OK.

Thanks,
Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vec_info::new_vinfo_for_stmt)
>         (vec_info::set_vinfo_for_stmt, vec_info::free_stmt_vec_infos)
>         (vec_info::free_stmt_vec_info): New private member functions.
>         (set_stmt_vec_info_vec, free_stmt_vec_infos, vinfo_for_stmt)
>         (set_vinfo_for_stmt, new_stmt_vec_info, free_stmt_vec_info): Delete.
>         * tree-parloops.c (gather_scalar_reductions): Remove calls to
>         set_stmt_vec_info_vec and free_stmt_vec_infos.
>         * tree-vect-loop.c (_loop_vec_info): Remove call to
>         set_stmt_vec_info_vec.
>         * tree-vect-stmts.c (new_stmt_vec_info, set_stmt_vec_info_vec)
>         (free_stmt_vec_infos, free_stmt_vec_info): Delete in favor of...
>         * tree-vectorizer.c (vec_info::new_stmt_vec_info)
>         (vec_info::set_vinfo_for_stmt, vec_info::free_stmt_vec_infos)
>         (vec_info::free_stmt_vec_info): ...these new functions.  Remove
>         assignments in {vec_info::,}new_stmt_vec_info that are redundant
>         with the clearing in the xcalloc.
>         (stmt_vec_info_vec): Delete.
>         (vec_info::vec_info): Don't call set_stmt_vec_info_vec.
>         (vectorize_loops): Likewise.
>         (vec_info::~vec_info): Remove argument from call to
>         free_stmt_vec_infos.
>         (vec_info::add_stmt): Remove vinfo argument from call to
>         new_stmt_vec_info.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:26.084281700 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:29.300253129 +0100
> @@ -266,6 +266,12 @@ struct vec_info {
>
>    /* Cost data used by the target cost model.  */
>    void *target_cost_data;
> +
> +private:
> +  stmt_vec_info new_stmt_vec_info (gimple *stmt);
> +  void set_vinfo_for_stmt (gimple *, stmt_vec_info);
> +  void free_stmt_vec_infos ();
> +  void free_stmt_vec_info (stmt_vec_info);
>  };
>
>  struct _loop_vec_info;
> @@ -1085,43 +1091,6 @@ inline stmt_vec_info::operator gimple *
>    return m_ptr ? m_ptr->stmt : NULL;
>  }
>
> -extern vec<stmt_vec_info> *stmt_vec_info_vec;
> -
> -void set_stmt_vec_info_vec (vec<stmt_vec_info> *);
> -void free_stmt_vec_infos (vec<stmt_vec_info> *);
> -
> -/* Return a stmt_vec_info corresponding to STMT.  */
> -
> -static inline stmt_vec_info
> -vinfo_for_stmt (gimple *stmt)
> -{
> -  int uid = gimple_uid (stmt);
> -  if (uid <= 0)
> -    return NULL;
> -
> -  return (*stmt_vec_info_vec)[uid - 1];
> -}
> -
> -/* Set vectorizer information INFO for STMT.  */
> -
> -static inline void
> -set_vinfo_for_stmt (gimple *stmt, stmt_vec_info info)
> -{
> -  unsigned int uid = gimple_uid (stmt);
> -  if (uid == 0)
> -    {
> -      gcc_checking_assert (info);
> -      uid = stmt_vec_info_vec->length () + 1;
> -      gimple_set_uid (stmt, uid);
> -      stmt_vec_info_vec->safe_push (info);
> -    }
> -  else
> -    {
> -      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
> -      (*stmt_vec_info_vec)[uid - 1] = info;
> -    }
> -}
> -
>  static inline bool
>  nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
>  {
> @@ -1483,8 +1452,6 @@ extern bool supportable_widening_operati
>  extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
>                                              enum tree_code *,
>                                              int *, vec<tree> *);
> -extern stmt_vec_info new_stmt_vec_info (gimple *stmt, vec_info *);
> -extern void free_stmt_vec_info (stmt_vec_info);
>  extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
>                                   enum vect_cost_for_stmt, stmt_vec_info,
>                                   int, enum vect_cost_model_location);
> Index: gcc/tree-parloops.c
> ===================================================================
> --- gcc/tree-parloops.c 2018-07-24 10:22:57.273070426 +0100
> +++ gcc/tree-parloops.c 2018-07-24 10:24:29.296253164 +0100
> @@ -2592,10 +2592,6 @@ gather_scalar_reductions (loop_p loop, r
>    auto_vec<gphi *, 4> double_reduc_phis;
>    auto_vec<gimple *, 4> double_reduc_stmts;
>
> -  vec<stmt_vec_info> stmt_vec_infos;
> -  stmt_vec_infos.create (50);
> -  set_stmt_vec_info_vec (&stmt_vec_infos);
> -
>    vec_info_shared shared;
>    simple_loop_info = vect_analyze_loop_form (loop, &shared);
>    if (simple_loop_info == NULL)
> @@ -2679,14 +2675,11 @@ gather_scalar_reductions (loop_p loop, r
>      }
>
>   gather_done:
> -  /* Release the claim on gimple_uid.  */
> -  free_stmt_vec_infos (&stmt_vec_infos);
> -
>    if (reduction_list->elements () == 0)
>      return;
>
>    /* As gimple_uid is used by the vectorizer in between vect_analyze_loop_form
> -     and free_stmt_vec_info_vec, we can set gimple_uid of reduc_phi stmts only
> +     and delete simple_loop_info, we can set gimple_uid of reduc_phi stmts only
>       now.  */
>    basic_block bb;
>    FOR_EACH_BB_FN (bb, cfun)
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:24:26.080281735 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:29.296253164 +0100
> @@ -888,8 +888,6 @@ _loop_vec_info::~_loop_vec_info ()
>    gimple_stmt_iterator si;
>    int j;
>
> -  /* ???  We're releasing loop_vinfos en-block.  */
> -  set_stmt_vec_info_vec (&stmt_vec_infos);
>    nbbs = loop->num_nodes;
>    for (j = 0; j < nbbs; j++)
>      {
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:26.084281700 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:29.300253129 +0100
> @@ -9850,98 +9850,6 @@ vect_remove_stores (stmt_vec_info first_
>      }
>  }
>
> -
> -/* Function new_stmt_vec_info.
> -
> -   Create and initialize a new stmt_vec_info struct for STMT.  */
> -
> -stmt_vec_info
> -new_stmt_vec_info (gimple *stmt, vec_info *vinfo)
> -{
> -  stmt_vec_info res;
> -  res = (_stmt_vec_info *) xcalloc (1, sizeof (struct _stmt_vec_info));
> -
> -  STMT_VINFO_TYPE (res) = undef_vec_info_type;
> -  STMT_VINFO_STMT (res) = stmt;
> -  res->vinfo = vinfo;
> -  STMT_VINFO_RELEVANT (res) = vect_unused_in_scope;
> -  STMT_VINFO_LIVE_P (res) = false;
> -  STMT_VINFO_VECTYPE (res) = NULL;
> -  STMT_VINFO_VEC_STMT (res) = NULL;
> -  STMT_VINFO_VECTORIZABLE (res) = true;
> -  STMT_VINFO_IN_PATTERN_P (res) = false;
> -  STMT_VINFO_PATTERN_DEF_SEQ (res) = NULL;
> -  STMT_VINFO_DATA_REF (res) = NULL;
> -  STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
> -  STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
> -
> -  if (gimple_code (stmt) == GIMPLE_PHI
> -      && is_loop_header_bb_p (gimple_bb (stmt)))
> -    STMT_VINFO_DEF_TYPE (res) = vect_unknown_def_type;
> -  else
> -    STMT_VINFO_DEF_TYPE (res) = vect_internal_def;
> -
> -  STMT_VINFO_SAME_ALIGN_REFS (res).create (0);
> -  STMT_SLP_TYPE (res) = loop_vect;
> -  STMT_VINFO_NUM_SLP_USES (res) = 0;
> -
> -  res->first_element = NULL; /* GROUP_FIRST_ELEMENT */
> -  res->next_element = NULL; /* GROUP_NEXT_ELEMENT */
> -  res->size = 0; /* GROUP_SIZE */
> -  res->store_count = 0; /* GROUP_STORE_COUNT */
> -  res->gap = 0; /* GROUP_GAP */
> -  res->same_dr_stmt = NULL; /* GROUP_SAME_DR_STMT */
> -
> -  /* This is really "uninitialized" until vect_compute_data_ref_alignment.  */
> -  res->dr_aux.misalignment = DR_MISALIGNMENT_UNINITIALIZED;
> -
> -  return res;
> -}
> -
> -
> -/* Set the current stmt_vec_info vector to V.  */
> -
> -void
> -set_stmt_vec_info_vec (vec<stmt_vec_info> *v)
> -{
> -  stmt_vec_info_vec = v;
> -}
> -
> -/* Free the stmt_vec_info entries in V and release V.  */
> -
> -void
> -free_stmt_vec_infos (vec<stmt_vec_info> *v)
> -{
> -  unsigned int i;
> -  stmt_vec_info info;
> -  FOR_EACH_VEC_ELT (*v, i, info)
> -    if (info != NULL_STMT_VEC_INFO)
> -      free_stmt_vec_info (info);
> -  if (v == stmt_vec_info_vec)
> -    stmt_vec_info_vec = NULL;
> -  v->release ();
> -}
> -
> -
> -/* Free stmt vectorization related info.  */
> -
> -void
> -free_stmt_vec_info (stmt_vec_info stmt_info)
> -{
> -  if (stmt_info->pattern_stmt_p)
> -    {
> -      gimple_set_bb (stmt_info->stmt, NULL);
> -      tree lhs = gimple_get_lhs (stmt_info->stmt);
> -      if (lhs && TREE_CODE (lhs) == SSA_NAME)
> -       release_ssa_name (lhs);
> -    }
> -
> -  STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
> -  STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
> -  free (stmt_info);
> -}
> -
> -
>  /* Function get_vectype_for_scalar_type_and_size.
>
>     Returns the vector type corresponding to SCALAR_TYPE  and SIZE as supported
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:26.084281700 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:29.300253129 +0100
> @@ -84,9 +84,6 @@ Software Foundation; either version 3, o
>  /* Loop or bb location, with hotness information.  */
>  dump_user_location_t vect_location;
>
> -/* Vector mapping GIMPLE stmt to stmt_vec_info. */
> -vec<stmt_vec_info> *stmt_vec_info_vec;
> -
>  /* Dump a cost entry according to args to F.  */
>
>  void
> @@ -457,7 +454,6 @@ vec_info::vec_info (vec_info::vec_kind k
>      target_cost_data (target_cost_data_in)
>  {
>    stmt_vec_infos.create (50);
> -  set_stmt_vec_info_vec (&stmt_vec_infos);
>  }
>
>  vec_info::~vec_info ()
> @@ -469,7 +465,7 @@ vec_info::~vec_info ()
>      vect_free_slp_instance (instance, true);
>
>    destroy_cost_data (target_cost_data);
> -  free_stmt_vec_infos (&stmt_vec_infos);
> +  free_stmt_vec_infos ();
>  }
>
>  vec_info_shared::vec_info_shared ()
> @@ -513,7 +509,7 @@ vec_info_shared::check_datarefs ()
>  stmt_vec_info
>  vec_info::add_stmt (gimple *stmt)
>  {
> -  stmt_vec_info res = new_stmt_vec_info (stmt, this);
> +  stmt_vec_info res = new_stmt_vec_info (stmt);
>    set_vinfo_for_stmt (stmt, res);
>    return res;
>  }
> @@ -608,6 +604,87 @@ vec_info::replace_stmt (gimple_stmt_iter
>    gsi_replace (gsi, new_stmt, true);
>  }
>
> +/* Create and initialize a new stmt_vec_info struct for STMT.  */
> +
> +stmt_vec_info
> +vec_info::new_stmt_vec_info (gimple *stmt)
> +{
> +  stmt_vec_info res = XCNEW (struct _stmt_vec_info);
> +  res->vinfo = this;
> +  res->stmt = stmt;
> +
> +  STMT_VINFO_TYPE (res) = undef_vec_info_type;
> +  STMT_VINFO_RELEVANT (res) = vect_unused_in_scope;
> +  STMT_VINFO_VECTORIZABLE (res) = true;
> +  STMT_VINFO_VEC_REDUCTION_TYPE (res) = TREE_CODE_REDUCTION;
> +  STMT_VINFO_VEC_CONST_COND_REDUC_CODE (res) = ERROR_MARK;
> +
> +  if (gimple_code (stmt) == GIMPLE_PHI
> +      && is_loop_header_bb_p (gimple_bb (stmt)))
> +    STMT_VINFO_DEF_TYPE (res) = vect_unknown_def_type;
> +  else
> +    STMT_VINFO_DEF_TYPE (res) = vect_internal_def;
> +
> +  STMT_VINFO_SAME_ALIGN_REFS (res).create (0);
> +  STMT_SLP_TYPE (res) = loop_vect;
> +
> +  /* This is really "uninitialized" until vect_compute_data_ref_alignment.  */
> +  res->dr_aux.misalignment = DR_MISALIGNMENT_UNINITIALIZED;
> +
> +  return res;
> +}
> +
> +/* Associate STMT with INFO.  */
> +
> +void
> +vec_info::set_vinfo_for_stmt (gimple *stmt, stmt_vec_info info)
> +{
> +  unsigned int uid = gimple_uid (stmt);
> +  if (uid == 0)
> +    {
> +      gcc_checking_assert (info);
> +      uid = stmt_vec_infos.length () + 1;
> +      gimple_set_uid (stmt, uid);
> +      stmt_vec_infos.safe_push (info);
> +    }
> +  else
> +    {
> +      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
> +      stmt_vec_infos[uid - 1] = info;
> +    }
> +}
> +
> +/* Free the contents of stmt_vec_infos.  */
> +
> +void
> +vec_info::free_stmt_vec_infos (void)
> +{
> +  unsigned int i;
> +  stmt_vec_info info;
> +  FOR_EACH_VEC_ELT (stmt_vec_infos, i, info)
> +    if (info != NULL_STMT_VEC_INFO)
> +      free_stmt_vec_info (info);
> +  stmt_vec_infos.release ();
> +}
> +
> +/* Free STMT_INFO.  */
> +
> +void
> +vec_info::free_stmt_vec_info (stmt_vec_info stmt_info)
> +{
> +  if (stmt_info->pattern_stmt_p)
> +    {
> +      gimple_set_bb (stmt_info->stmt, NULL);
> +      tree lhs = gimple_get_lhs (stmt_info->stmt);
> +      if (lhs && TREE_CODE (lhs) == SSA_NAME)
> +       release_ssa_name (lhs);
> +    }
> +
> +  STMT_VINFO_SAME_ALIGN_REFS (stmt_info).release ();
> +  STMT_VINFO_SIMD_CLONE_INFO (stmt_info).release ();
> +  free (stmt_info);
> +}
> +
>  /* A helper function to free scev and LOOP niter information, as well as
>     clear loop constraint LOOP_C_FINITE.  */
>
> @@ -963,8 +1040,6 @@ vectorize_loops (void)
>    if (cfun->has_simduid_loops)
>      note_simd_array_uses (&simd_array_to_simduid_htab);
>
> -  set_stmt_vec_info_vec (NULL);
> -
>    /*  ----------- Analyze loops. -----------  */
>
>    /* If some loop was duplicated, it gets bigger number

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [45/46] Remove vect_stmt_in_region_p
  2018-07-24 10:10 ` [45/46] Remove vect_stmt_in_region_p Richard Sandiford
@ 2018-07-31 12:06   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:06 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:11 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Unlike the old vinfo_for_stmt, vec_info::lookup_stmt can cope with
> any statement, so there's no need to check beforehand that the statement
> is part of the vectorisable region.  This means that there are no longer
> any calls to vect_stmt_in_region_p.
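
In other words (a sketch of the before/after, not a hunk from the
patch):

  /* Before: the global lookup had to be guarded.  */
  stmt_vec_info info = NULL;
  if (vect_stmt_in_region_p (vinfo, stmt))
    info = vinfo_for_stmt (stmt);

  /* After: the guard folds into the lookup, which simply returns
     NULL for statements outside the region.  */
  stmt_vec_info info = vinfo->lookup_stmt (stmt);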

OK.

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_stmt_in_region_p): Delete.
>         * tree-vectorizer.c (vect_stmt_in_region_p): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:29.300253129 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:32.472224947 +0100
> @@ -1609,7 +1609,6 @@ void vect_pattern_recog (vec_info *);
>
>  /* In tree-vectorizer.c.  */
>  unsigned vectorize_loops (void);
> -bool vect_stmt_in_region_p (vec_info *, gimple *);
>  void vect_free_loop_info_assumptions (struct loop *);
>
>  #endif  /* GCC_TREE_VECTORIZER_H  */
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:29.300253129 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:32.472224947 +0100
> @@ -700,33 +700,6 @@ vect_free_loop_info_assumptions (struct
>    loop_constraint_clear (loop, LOOP_C_FINITE);
>  }
>
> -/* Return whether STMT is inside the region we try to vectorize.  */
> -
> -bool
> -vect_stmt_in_region_p (vec_info *vinfo, gimple *stmt)
> -{
> -  if (!gimple_bb (stmt))
> -    return false;
> -
> -  if (loop_vec_info loop_vinfo = dyn_cast <loop_vec_info> (vinfo))
> -    {
> -      struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
> -      if (!flow_bb_inside_loop_p (loop, gimple_bb (stmt)))
> -       return false;
> -    }
> -  else
> -    {
> -      bb_vec_info bb_vinfo = as_a <bb_vec_info> (vinfo);
> -      if (gimple_bb (stmt) != BB_VINFO_BB (bb_vinfo)
> -         || gimple_uid (stmt) == -1U
> -         || gimple_code (stmt) == GIMPLE_PHI)
> -       return false;
> -    }
> -
> -  return true;
> -}
> -
> -
>  /* If LOOP has been versioned during ifcvt, return the internal call
>     guarding it.  */
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [46/46] Turn stmt_vec_info back into a typedef
  2018-07-24 10:11 ` [46/46] Turn stmt_vec_info back into a typedef Richard Sandiford
@ 2018-07-31 12:07   ` Richard Biener
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Biener @ 2018-07-31 12:07 UTC (permalink / raw)
  To: GCC Patches, richard.sandiford

On Tue, Jul 24, 2018 at 12:11 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch removes the stmt_vec_info wrapper class added near the
> beginning of the series and turns stmt_vec_info back into a typedef.
>

OK.  For the whole series now, if I didn't miss anything...

Richard.

> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (stmt_vec_info): Turn back into a typedef.
>         (NULL_STMT_VEC_INFO): Delete.
>         (stmt_vec_info::operator*): Likewise.
>         (stmt_vec_info::operator gimple *): Likewise.
>         * tree-vect-loop.c (vectorizable_reduction): Use NULL instead
>         of NULL_STMT_VEC_INFO.
>         * tree-vect-patterns.c (vect_init_pattern_stmt): Likewise.
>         (vect_reassociating_reduction_p): Likewise.
>         * tree-vect-stmts.c (vect_build_gather_load_calls): Likewise.
>         (vectorizable_store): Likewise.
>         * tree-vectorizer.c (vec_info::set_vinfo_for_stmt): Likewise.
>         (vec_info::free_stmt_vec_infos): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:32.472224947 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:35.888194598 +0100
> @@ -21,26 +21,7 @@ Software Foundation; either version 3, o
>  #ifndef GCC_TREE_VECTORIZER_H
>  #define GCC_TREE_VECTORIZER_H
>
> -class stmt_vec_info {
> -public:
> -  stmt_vec_info () {}
> -  stmt_vec_info (struct _stmt_vec_info *ptr) : m_ptr (ptr) {}
> -  struct _stmt_vec_info *operator-> () const { return m_ptr; }
> -  struct _stmt_vec_info &operator* () const;
> -  operator struct _stmt_vec_info * () const { return m_ptr; }
> -  operator gimple * () const;
> -  operator void * () const { return m_ptr; }
> -  operator bool () const { return m_ptr; }
> -  bool operator == (const stmt_vec_info &x) { return x.m_ptr == m_ptr; }
> -  bool operator == (_stmt_vec_info *x) { return x == m_ptr; }
> -  bool operator != (const stmt_vec_info &x) { return x.m_ptr != m_ptr; }
> -  bool operator != (_stmt_vec_info *x) { return x != m_ptr; }
> -
> -private:
> -  struct _stmt_vec_info *m_ptr;
> -};
> -
> -#define NULL_STMT_VEC_INFO (stmt_vec_info (NULL))
> +typedef struct _stmt_vec_info *stmt_vec_info;
>
>  #include "tree-data-ref.h"
>  #include "tree-hash-traits.h"
> @@ -1080,17 +1061,6 @@ #define VECT_SCALAR_BOOLEAN_TYPE_P(TYPE)
>         && TYPE_PRECISION (TYPE) == 1           \
>         && TYPE_UNSIGNED (TYPE)))
>
> -inline _stmt_vec_info &
> -stmt_vec_info::operator* () const
> -{
> -  return *m_ptr;
> -}
> -
> -inline stmt_vec_info::operator gimple * () const
> -{
> -  return m_ptr ? m_ptr->stmt : NULL;
> -}
> -
>  static inline bool
>  nested_in_vect_loop_p (struct loop *loop, stmt_vec_info stmt_info)
>  {
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c        2018-07-24 10:24:29.296253164 +0100
> +++ gcc/tree-vect-loop.c        2018-07-24 10:24:35.884194634 +0100
> @@ -6755,7 +6755,7 @@ vectorizable_reduction (stmt_vec_info st
>    if (slp_node)
>      neutral_op = neutral_op_for_slp_reduction
>        (slp_node_instance->reduc_phis, code,
> -       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL_STMT_VEC_INFO);
> +       REDUC_GROUP_FIRST_ELEMENT (stmt_info) != NULL);
>
>    if (double_reduc && reduction_type == FOLD_LEFT_REDUCTION)
>      {
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c    2018-07-24 10:24:02.360492422 +0100
> +++ gcc/tree-vect-patterns.c    2018-07-24 10:24:35.884194634 +0100
> @@ -104,7 +104,7 @@ vect_init_pattern_stmt (gimple *pattern_
>  {
>    vec_info *vinfo = orig_stmt_info->vinfo;
>    stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
> -  if (pattern_stmt_info == NULL_STMT_VEC_INFO)
> +  if (pattern_stmt_info == NULL)
>      pattern_stmt_info = orig_stmt_info->vinfo->add_stmt (pattern_stmt);
>    gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));
>
> @@ -819,7 +819,7 @@ vect_reassociating_reduction_p (stmt_vec
>  {
>    return (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def
>           ? STMT_VINFO_REDUC_TYPE (stmt_vinfo) != FOLD_LEFT_REDUCTION
> -         : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL_STMT_VEC_INFO);
> +         : REDUC_GROUP_FIRST_ELEMENT (stmt_vinfo) != NULL);
>  }
>
>  /* As above, but also require it to have code CODE and to be a reduction
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:29.300253129 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:35.888194598 +0100
> @@ -2842,7 +2842,7 @@ vect_build_gather_load_calls (stmt_vec_i
>           new_stmt_info = loop_vinfo->lookup_def (var);
>         }
>
> -      if (prev_stmt_info == NULL_STMT_VEC_INFO)
> +      if (prev_stmt_info == NULL)
>         STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>        else
>         STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> @@ -6574,7 +6574,7 @@ vectorizable_store (stmt_vec_info stmt_i
>           stmt_vec_info new_stmt_info
>             = vect_finish_stmt_generation (stmt_info, new_stmt, gsi);
>
> -         if (prev_stmt_info == NULL_STMT_VEC_INFO)
> +         if (prev_stmt_info == NULL)
>             STMT_VINFO_VEC_STMT (stmt_info) = *vec_stmt = new_stmt_info;
>           else
>             STMT_VINFO_RELATED_STMT (prev_stmt_info) = new_stmt_info;
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c       2018-07-24 10:24:32.472224947 +0100
> +++ gcc/tree-vectorizer.c       2018-07-24 10:24:35.888194598 +0100
> @@ -649,7 +649,7 @@ vec_info::set_vinfo_for_stmt (gimple *st
>      }
>    else
>      {
> -      gcc_checking_assert (info == NULL_STMT_VEC_INFO);
> +      gcc_checking_assert (info == NULL);
>        stmt_vec_infos[uid - 1] = info;
>      }
>  }
> @@ -662,7 +662,7 @@ vec_info::free_stmt_vec_infos (void)
>    unsigned int i;
>    stmt_vec_info info;
>    FOR_EACH_VEC_ELT (stmt_vec_infos, i, info)
> -    if (info != NULL_STMT_VEC_INFO)
> +    if (info != NULL)
>        free_stmt_vec_info (info);
>    stmt_vec_infos.release ();
>  }

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [18/46] Make SLP_TREE_SCALAR_STMTS a vec<stmt_vec_info>
  2018-07-25  9:27   ` Richard Biener
@ 2018-07-31 15:03     ` Richard Sandiford
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Sandiford @ 2018-07-31 15:03 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches

Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
>> a vec<stmt_vec_info>.  It's longer than the previous conversions
>> but mostly mechanical.
>
> OK.  I don't remember exactly, but vect_external_def SLP nodes have
> an empty stmts vector then?  I realize we only have those for defs that
> are in the vectorized region.

Yeah, for this the thing we care about is that it's part of the
vectorisable region.  I'm not sure how much stuff we hang off
a vect_external_def SLP stmt_vec_info, but we do need at least
STMT_VINFO_DEF_TYPE as well as STMT_VINFO_STMT itself.
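
So the most the SLP code can rely on for such a node is something like
this (a sketch, assuming the def statement is inside the vectorizable
region):

  stmt_vec_info def_info = vinfo->lookup_stmt (def_stmt);
  if (def_info && STMT_VINFO_DEF_TYPE (def_info) == vect_external_def)
    {
      /* Only the statement itself and its def type are guaranteed
         to be meaningful for an external def.  */
      gcc_assert (STMT_VINFO_STMT (def_info) == def_stmt);
    }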

Thanks,
Richard

>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
>>         to a vec<stmt_vec_info>.
>>         * tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
>>         (vect_create_new_slp_node): Take a vec<gimple *> instead of a
>>         vec<stmt_vec_info>.
>>         (_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
>>         to a vec<stmt_vec_info>.
>>         (bst_traits::value_type, bst_traits::value_type): Likewise.
>>         (bst_traits::hash): Update accordingly.
>>         (vect_get_and_check_slp_defs): Change the stmts parameter from
>>         a vec<gimple *> to a vec<stmt_vec_info>.
>>         (vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
>>         (vect_build_slp_tree): Likewise.
>>         (vect_build_slp_tree_2): Likewise.  Update uses of
>>         SLP_TREE_SCALAR_STMTS.
>>         (vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
>>         (vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
>>         (vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
>>         (vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
>>         (vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
>>         (vect_slp_analyze_node_operations, vect_slp_analyze_operations)
>>         (vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
>>         (vect_get_constant_vectors, vect_get_slp_defs)
>>         (vect_transform_slp_perm_load, vect_schedule_slp_instance)
>>         (vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
>>         (vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
>>         instead of gimple stmts.
>>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
>>         the stores parameter for a vec<gimple *> to a vec<stmt_vec_info>.
>>         (vect_slp_analyze_instance_dependence): Update uses of
>>         SLP_TREE_SCALAR_STMTS.
>>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>>         (vect_slp_analyze_and_verify_instance_alignment): Likewise.
>>         * tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
>>         (get_initial_defs_for_reduction): Likewise.
>>         (vect_create_epilog_for_reduction): Likewise.
>>         (vectorize_fold_left_reduction): Likewise.
>>         * tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
>>         (vect_model_simple_cost, vectorizable_shift, vectorizable_load)
>>         (can_vectorize_live_stmts): Likewise.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:22:57.277070390 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:00.401042649 +0100
>> @@ -138,7 +138,7 @@ struct _slp_tree {
>>    /* Nodes that contain def-stmts of this node statements operands.  */
>>    vec<slp_tree> children;
>>    /* A group of scalar stmts to be vectorized together.  */
>> -  vec<gimple *> stmts;
>> +  vec<stmt_vec_info> stmts;
>>    /* Load permutation relative to the stores, NULL if there is no
>>       permutation.  */
>>    vec<unsigned> load_permutation;
>> Index: gcc/tree-vect-slp.c
>> ===================================================================
>> --- gcc/tree-vect-slp.c 2018-07-24 10:22:57.277070390 +0100
>> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:00.401042649 +0100
>> @@ -66,11 +66,11 @@ vect_free_slp_tree (slp_tree node, bool
>>       statements would be redundant.  */
>>    if (!final_p)
>>      {
>> -      gimple *stmt;
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +      stmt_vec_info stmt_info;
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>         {
>> -         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
>> -         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
>> +         gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
>> +         STMT_VINFO_NUM_SLP_USES (stmt_info)--;
>>         }
>>      }
>>
>> @@ -99,21 +99,21 @@ vect_free_slp_instance (slp_instance ins
>>  /* Create an SLP node for SCALAR_STMTS.  */
>>
>>  static slp_tree
>> -vect_create_new_slp_node (vec<gimple *> scalar_stmts)
>> +vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
>>  {
>>    slp_tree node;
>> -  gimple *stmt = scalar_stmts[0];
>> +  stmt_vec_info stmt_info = scalar_stmts[0];
>>    unsigned int nops;
>>
>> -  if (is_gimple_call (stmt))
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      nops = gimple_call_num_args (stmt);
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        nops = gimple_num_ops (stmt) - 1;
>>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>>         nops++;
>>      }
>> -  else if (gimple_code (stmt) == GIMPLE_PHI)
>> +  else if (is_a <gphi *> (stmt_info->stmt))
>>      nops = 0;
>>    else
>>      return NULL;
>> @@ -128,8 +128,8 @@ vect_create_new_slp_node (vec<gimple *>
>>    SLP_TREE_DEF_TYPE (node) = vect_internal_def;
>>
>>    unsigned i;
>> -  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
>> -    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
>> +  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
>> +    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
>>
>>    return node;
>>  }
>> @@ -141,7 +141,7 @@ vect_create_new_slp_node (vec<gimple *>
>>  typedef struct _slp_oprnd_info
>>  {
>>    /* Def-stmts for the operands.  */
>> -  vec<gimple *> def_stmts;
>> +  vec<stmt_vec_info> def_stmts;
>>    /* Information about the first statement, its vector def-type, type, the
>>       operand itself in case it's constant, and an indication if it's a pattern
>>       stmt.  */
>> @@ -297,10 +297,10 @@ can_duplicate_and_interleave_p (unsigned
>>     ok return 0.  */
>>  static int
>>  vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
>> -                            vec<gimple *> stmts, unsigned stmt_num,
>> +                            vec<stmt_vec_info> stmts, unsigned stmt_num,
>>                              vec<slp_oprnd_info> *oprnds_info)
>>  {
>> -  gimple *stmt = stmts[stmt_num];
>> +  stmt_vec_info stmt_info = stmts[stmt_num];
>>    tree oprnd;
>>    unsigned int i, number_of_oprnds;
>>    enum vect_def_type dt = vect_uninitialized_def;
>> @@ -312,12 +312,12 @@ vect_get_and_check_slp_defs (vec_info *v
>>    bool first = stmt_num == 0;
>>    bool second = stmt_num == 1;
>>
>> -  if (is_gimple_call (stmt))
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      {
>>        number_of_oprnds = gimple_call_num_args (stmt);
>>        first_op_idx = 3;
>>      }
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        enum tree_code code = gimple_assign_rhs_code (stmt);
>>        number_of_oprnds = gimple_num_ops (stmt) - 1;
>> @@ -347,12 +347,13 @@ vect_get_and_check_slp_defs (vec_info *v
>>           int *map = maps[*swap];
>>
>>           if (i < 2)
>> -           oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
>> +           oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
>> +                                            first_op_idx), map[i]);
>>           else
>> -           oprnd = gimple_op (stmt, map[i]);
>> +           oprnd = gimple_op (stmt_info->stmt, map[i]);
>>         }
>>        else
>> -       oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
>> +       oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
>>
>>        oprnd_info = (*oprnds_info)[i];
>>
>> @@ -518,18 +519,20 @@ vect_get_and_check_slp_defs (vec_info *v
>>      {
>>        /* If there are already uses of this stmt in a SLP instance then
>>           we've committed to the operand order and can't swap it.  */
>> -      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
>> +      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
>>         {
>>           if (dump_enabled_p ())
>>             {
>>               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                "Build SLP failed: cannot swap operands of "
>>                                "shared stmt ");
>> -             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
>> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> +                               stmt_info->stmt, 0);
>>             }
>>           return -1;
>>         }
>>
>> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>        if (first_op_cond)
>>         {
>>           tree cond = gimple_assign_rhs1 (stmt);
>> @@ -655,8 +658,9 @@ vect_record_max_nunits (vec_info *vinfo,
>>     would be permuted.  */
>>
>>  static bool
>> -vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
>> -                              tree vectype, tree_code alt_stmt_code)
>> +vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
>> +                              unsigned int group_size, tree vectype,
>> +                              tree_code alt_stmt_code)
>>  {
>>    unsigned HOST_WIDE_INT count;
>>    if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
>> @@ -666,7 +670,8 @@ vect_two_operations_perm_ok_p (vec<gimpl
>>    for (unsigned int i = 0; i < count; ++i)
>>      {
>>        unsigned int elt = i;
>> -      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
>> +      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
>> +      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
>>         elt += count;
>>        sel.quick_push (elt);
>>      }
>> @@ -690,12 +695,12 @@ vect_two_operations_perm_ok_p (vec<gimpl
>>
>>  static bool
>>  vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits, bool *matches,
>>                        bool *two_operators)
>>  {
>>    unsigned int i;
>> -  gimple *first_stmt = stmts[0], *stmt = stmts[0];
>> +  stmt_vec_info first_stmt_info = stmts[0];
>>    enum tree_code first_stmt_code = ERROR_MARK;
>>    enum tree_code alt_stmt_code = ERROR_MARK;
>>    enum tree_code rhs_code = ERROR_MARK;
>> @@ -710,9 +715,10 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>    gimple *first_load = NULL, *prev_first_load = NULL;
>>
>>    /* For every stmt in NODE find its def stmt/s.  */
>> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +  stmt_vec_info stmt_info;
>> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>>      {
>> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> +      gimple *stmt = stmt_info->stmt;
>>        swap[i] = 0;
>>        matches[i] = false;
>>
>> @@ -723,7 +729,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>         }
>>
>>        /* Fail to vectorize statements marked as unvectorizable.  */
>> -      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
>> +      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
>>          {
>>            if (dump_enabled_p ())
>>              {
>> @@ -755,7 +761,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>        if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
>>                                            &nunits_vectype)
>>           || (nunits_vectype
>> -             && !vect_record_max_nunits (vinfo, stmt, group_size,
>> +             && !vect_record_max_nunits (vinfo, stmt_info, group_size,
>>                                           nunits_vectype, max_nunits)))
>>         {
>>           /* Fatal mismatch.  */
>> @@ -877,7 +883,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                    && (alt_stmt_code == PLUS_EXPR
>>                        || alt_stmt_code == MINUS_EXPR)
>>                    && rhs_code == alt_stmt_code)
>> -              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
>> +             && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
>>                     && (first_stmt_code == ARRAY_REF
>>                         || first_stmt_code == BIT_FIELD_REF
>>                         || first_stmt_code == INDIRECT_REF
>> @@ -893,7 +899,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                    "original stmt ");
>>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                   first_stmt, 0);
>> +                                   first_stmt_info->stmt, 0);
>>                 }
>>               /* Mismatch.  */
>>               continue;
>> @@ -915,8 +921,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>
>>           if (rhs_code == CALL_EXPR)
>>             {
>> -             gimple *first_stmt = stmts[0];
>> -             if (!compatible_calls_p (as_a <gcall *> (first_stmt),
>> +             if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
>>                                        as_a <gcall *> (stmt)))
>>                 {
>>                   if (dump_enabled_p ())
>> @@ -933,7 +938,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>         }
>>
>>        /* Grouped store or load.  */
>> -      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         {
>>           if (REFERENCE_CLASS_P (lhs))
>>             {
>> @@ -943,7 +948,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>           else
>>             {
>>               /* Load.  */
>> -              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
>> +             first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
>>                if (prev_first_load)
>>                  {
>>                    /* Check that there are no loads from different interleaving
>> @@ -1061,7 +1066,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                                              vectype, alt_stmt_code))
>>         {
>>           for (i = 0; i < group_size; ++i)
>> -           if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
>> +           if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
>>               {
>>                 matches[i] = false;
>>                 if (dump_enabled_p ())
>> @@ -1070,11 +1075,11 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                                      "Build SLP failed: different operation "
>>                                      "in stmt ");
>>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                     stmts[i], 0);
>> +                                     stmts[i]->stmt, 0);
>>                     dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                      "original stmt ");
>>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                     first_stmt, 0);
>> +                                     first_stmt_info->stmt, 0);
>>                   }
>>               }
>>           return false;
>> @@ -1090,8 +1095,8 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>     need a special value for deleted that differs from empty.  */
>>  struct bst_traits
>>  {
>> -  typedef vec <gimple *> value_type;
>> -  typedef vec <gimple *> compare_type;
>> +  typedef vec <stmt_vec_info> value_type;
>> +  typedef vec <stmt_vec_info> compare_type;
>>    static inline hashval_t hash (value_type);
>>    static inline bool equal (value_type existing, value_type candidate);
>>    static inline bool is_empty (value_type x) { return !x.exists (); }
>> @@ -1105,7 +1110,7 @@ bst_traits::hash (value_type x)
>>  {
>>    inchash::hash h;
>>    for (unsigned i = 0; i < x.length (); ++i)
>> -    h.add_int (gimple_uid (x[i]));
>> +    h.add_int (gimple_uid (x[i]->stmt));
>>    return h.end ();
>>  }
>>  inline bool
>> @@ -1128,7 +1133,7 @@ typedef hash_map <vec <gimple *>, slp_tr
>>
>>  static slp_tree
>>  vect_build_slp_tree_2 (vec_info *vinfo,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits,
>>                        vec<slp_tree> *loads,
>>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
>> @@ -1136,7 +1141,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>
>>  static slp_tree
>>  vect_build_slp_tree (vec_info *vinfo,
>> -                    vec<gimple *> stmts, unsigned int group_size,
>> +                    vec<stmt_vec_info> stmts, unsigned int group_size,
>>                      poly_uint64 *max_nunits, vec<slp_tree> *loads,
>>                      bool *matches, unsigned *npermutes, unsigned *tree_size,
>>                      unsigned max_tree_size)
>> @@ -1151,7 +1156,7 @@ vect_build_slp_tree (vec_info *vinfo,
>>       scalars, see PR81723.  */
>>    if (! res)
>>      {
>> -      vec <gimple *> x;
>> +      vec <stmt_vec_info> x;
>>        x.create (stmts.length ());
>>        x.splice (stmts);
>>        bst_fail->add (x);
>> @@ -1168,7 +1173,7 @@ vect_build_slp_tree (vec_info *vinfo,
>>
>>  static slp_tree
>>  vect_build_slp_tree_2 (vec_info *vinfo,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits,
>>                        vec<slp_tree> *loads,
>>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
>> @@ -1176,53 +1181,54 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>  {
>>    unsigned nops, i, this_tree_size = 0;
>>    poly_uint64 this_max_nunits = *max_nunits;
>> -  gimple *stmt;
>>    slp_tree node;
>>
>>    matches[0] = false;
>>
>> -  stmt = stmts[0];
>> -  if (is_gimple_call (stmt))
>> +  stmt_vec_info stmt_info = stmts[0];
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      nops = gimple_call_num_args (stmt);
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        nops = gimple_num_ops (stmt) - 1;
>>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>>         nops++;
>>      }
>> -  else if (gimple_code (stmt) == GIMPLE_PHI)
>> +  else if (is_a <gphi *> (stmt_info->stmt))
>>      nops = 0;
>>    else
>>      return NULL;
>>
>>    /* If the SLP node is a PHI (induction or reduction), terminate
>>       the recursion.  */
>> -  if (gimple_code (stmt) == GIMPLE_PHI)
>> +  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
>>      {
>>        tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
>>        tree vectype = get_vectype_for_scalar_type (scalar_type);
>> -      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
>> +      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
>>                                    max_nunits))
>>         return NULL;
>>
>> -      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
>> +      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
>>        /* Induction from different IVs is not supported.  */
>>        if (def_type == vect_induction_def)
>>         {
>> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
>> -           if (stmt != stmts[0])
>> +         stmt_vec_info other_info;
>> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>> +           if (stmt_info != other_info)
>>               return NULL;
>>         }
>>        else
>>         {
>>           /* Else def types have to match.  */
>> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +         stmt_vec_info other_info;
>> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>>             {
>>               /* But for reduction chains only check on the first stmt.  */
>> -             if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
>> -                 && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
>> +             if (REDUC_GROUP_FIRST_ELEMENT (other_info)
>> +                 && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
>>                 continue;
>> -             if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
>> +             if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
>>                 return NULL;
>>             }
>>         }
>> @@ -1238,8 +1244,8 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>      return NULL;
>>
>>    /* If the SLP node is a load, terminate the recursion.  */
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
>> -      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> +      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
>>      {
>>        *max_nunits = this_max_nunits;
>>        node = vect_create_new_slp_node (stmts);
>> @@ -1250,7 +1256,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>    /* Get at the operands, verifying they are compatible.  */
>>    vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
>>    slp_oprnd_info oprnd_info;
>> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>>      {
>>        int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
>>                                              stmts, i, &oprnds_info);
>> @@ -1269,7 +1275,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>    auto_vec<slp_tree, 4> children;
>>    auto_vec<slp_tree> this_loads;
>>
>> -  stmt = stmts[0];
>> +  stmt_info = stmts[0];
>>
>>    if (tree_size)
>>      max_tree_size -= *tree_size;
>> @@ -1307,8 +1313,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>                  do extra work to cancel the pattern so the uses see the
>>                  scalar version.  */
>> -             && !is_pattern_stmt_p
>> -                   (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
>> +             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>>             {
>>               slp_tree grandchild;
>>
>> @@ -1352,7 +1357,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>              do extra work to cancel the pattern so the uses see the
>>              scalar version.  */
>> -         && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
>> +         && !is_pattern_stmt_p (stmt_info))
>>         {
>>           dump_printf_loc (MSG_NOTE, vect_location,
>>                            "Building vector operands from scalars\n");
>> @@ -1373,7 +1378,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>              as well as the arms under some constraints.  */
>>           && nops == 2
>>           && oprnds_info[1]->first_dt == vect_internal_def
>> -         && is_gimple_assign (stmt)
>> +         && is_gimple_assign (stmt_info->stmt)
>>           /* Do so only if the number of not successful permutes was nor more
>>              than a cut-ff as re-trying the recursive match on
>>              possibly each level of the tree would expose exponential
>> @@ -1389,9 +1394,10 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                 {
>>                   if (matches[j] != !swap_not_matching)
>>                     continue;
>> -                 gimple *stmt = stmts[j];
>> +                 stmt_vec_info stmt_info = stmts[j];
>>                   /* Verify if we can swap operands of this stmt.  */
>> -                 if (!is_gimple_assign (stmt)
>> +                 gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
>> +                 if (!stmt
>>                       || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
>>                     {
>>                       if (!swap_not_matching)
>> @@ -1406,7 +1412,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                      node and temporarily do that when processing it
>>                      (or wrap operand accessors in a helper).  */
>>                   else if (swap[j] != 0
>> -                          || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
>> +                          || STMT_VINFO_NUM_SLP_USES (stmt_info))
>>                     {
>>                       if (!swap_not_matching)
>>                         {
>> @@ -1417,7 +1423,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>> "Build SLP failed: cannot swap "
>>                                                "operands of shared stmt ");
>>                               dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
>> -                                               TDF_SLIM, stmts[j], 0);
>> +                                               TDF_SLIM, stmts[j]->stmt, 0);
>>                             }
>>                           goto fail;
>>                         }
>> @@ -1454,31 +1460,23 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                  if we end up building the operand from scalars as
>>                  we'll continue to process swapped operand two.  */
>>               for (j = 0; j < group_size; ++j)
>> -               {
>> -                 gimple *stmt = stmts[j];
>> -                 gimple_set_plf (stmt, GF_PLF_1, false);
>> -               }
>> +               gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
>>               for (j = 0; j < group_size; ++j)
>> -               {
>> -                 gimple *stmt = stmts[j];
>> -                 if (matches[j] == !swap_not_matching)
>> -                   {
>> -                     /* Avoid swapping operands twice.  */
>> -                     if (gimple_plf (stmt, GF_PLF_1))
>> -                       continue;
>> -                     swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
>> -                                        gimple_assign_rhs2_ptr (stmt));
>> -                     gimple_set_plf (stmt, GF_PLF_1, true);
>> -                   }
>> -               }
>> +               if (matches[j] == !swap_not_matching)
>> +                 {
>> +                   gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
>> +                   /* Avoid swapping operands twice.  */
>> +                   if (gimple_plf (stmt, GF_PLF_1))
>> +                     continue;
>> +                   swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
>> +                                      gimple_assign_rhs2_ptr (stmt));
>> +                   gimple_set_plf (stmt, GF_PLF_1, true);
>> +                 }
>>               /* Verify we swap all duplicates or none.  */
>>               if (flag_checking)
>>                 for (j = 0; j < group_size; ++j)
>> -                 {
>> -                   gimple *stmt = stmts[j];
>> -                   gcc_assert (gimple_plf (stmt, GF_PLF_1)
>> -                               == (matches[j] == !swap_not_matching));
>> -                 }
>> +                 gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
>> +                             == (matches[j] == !swap_not_matching));
>>
>>               /* If we have all children of child built up from scalars then
>>                  just throw that away and build it up this node from scalars.  */
>> @@ -1486,8 +1484,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>>                      to do extra work to cancel the pattern so the uses see the
>>                      scalar version.  */
>> -                 && !is_pattern_stmt_p
>> -                       (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
>> +                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>>                 {
>>                   unsigned int j;
>>                   slp_tree grandchild;
>> @@ -1550,16 +1547,16 @@ vect_print_slp_tree (dump_flags_t dump_k
>>                      slp_tree node)
>>  {
>>    int i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    dump_printf_loc (dump_kind, loc, "node%s\n",
>>                    SLP_TREE_DEF_TYPE (node) != vect_internal_def
>>                    ? " (external)" : "");
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>>        dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
>> -      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
>> +      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
>>      }
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_print_slp_tree (dump_kind, loc, child);
>> @@ -1575,15 +1572,15 @@ vect_print_slp_tree (dump_flags_t dump_k
>>  vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
>>  {
>>    int i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>>      return;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      if (j < 0 || i == j)
>> -      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
>> +      STMT_SLP_TYPE (stmt_info) = mark;
>>
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_mark_slp_stmts (child, mark, j);
>> @@ -1596,16 +1593,14 @@ vect_mark_slp_stmts (slp_tree node, enum
>>  vect_mark_slp_stmts_relevant (slp_tree node)
>>  {
>>    int i;
>> -  gimple *stmt;
>>    stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>>      return;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> -      stmt_info = vinfo_for_stmt (stmt);
>>        gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
>>                    || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
>>        STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
>> @@ -1622,8 +1617,8 @@ vect_mark_slp_stmts_relevant (slp_tree n
>>  vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
>>                            vec<unsigned> permutation)
>>  {
>> -  gimple *stmt;
>> -  vec<gimple *> tmp_stmts;
>> +  stmt_vec_info stmt_info;
>> +  vec<stmt_vec_info> tmp_stmts;
>>    unsigned int i;
>>    slp_tree child;
>>
>> @@ -1634,8 +1629,8 @@ vect_slp_rearrange_stmts (slp_tree node,
>>    tmp_stmts.create (group_size);
>>    tmp_stmts.quick_grow_cleared (group_size);
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> -    tmp_stmts[permutation[i]] = stmt;
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>> +    tmp_stmts[permutation[i]] = stmt_info;
>>
>>    SLP_TREE_SCALAR_STMTS (node).release ();
>>    SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
>> @@ -1696,13 +1691,14 @@ vect_attempt_slp_rearrange_stmts (slp_in
>>    poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
>>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>>      {
>> -      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
>> +      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +      first_stmt_info
>> +       = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
>>        /* But we have to keep those permutations that are required because
>>           of handling of gaps.  */
>>        if (known_eq (unrolling_factor, 1U)
>> -         || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
>> -             && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
>> +         || (group_size == DR_GROUP_SIZE (first_stmt_info)
>> +             && DR_GROUP_GAP (first_stmt_info) == 0))
>>         SLP_TREE_LOAD_PERMUTATION (node).release ();
>>        else
>>         for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
>> @@ -1721,7 +1717,7 @@ vect_supported_load_permutation_p (slp_i
>>    unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
>>    unsigned int i, j, k, next;
>>    slp_tree node;
>> -  gimple *stmt, *load, *next_load;
>> +  gimple *next_load;
>>
>>    if (dump_enabled_p ())
>>      {
>> @@ -1750,18 +1746,18 @@ vect_supported_load_permutation_p (slp_i
>>        return false;
>>
>>    node = SLP_INSTANCE_TREE (slp_instn);
>> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>
>>    /* Reduction (there are no data-refs in the root).
>>       In reduction chain the order of the loads is not important.  */
>> -  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
>> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  if (!STMT_VINFO_DATA_REF (stmt_info)
>> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      vect_attempt_slp_rearrange_stmts (slp_instn);
>>
>>    /* In basic block vectorization we allow any subchain of an interleaving
>>       chain.
>>       FORNOW: not supported in loop SLP because of realignment compications.  */
>> -  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_BB_VINFO (stmt_info))
>>      {
>>        /* Check whether the loads in an instance form a subchain and thus
>>           no permutation is necessary.  */
>> @@ -1771,24 +1767,25 @@ vect_supported_load_permutation_p (slp_i
>>             continue;
>>           bool subchain_p = true;
>>            next_load = NULL;
>> -          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
>> -            {
>> -              if (j != 0
>> -                 && (next_load != load
>> -                     || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
>> +         stmt_vec_info load_info;
>> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
>> +           {
>> +             if (j != 0
>> +                 && (next_load != load_info
>> +                     || DR_GROUP_GAP (load_info) != 1))
>>                 {
>>                   subchain_p = false;
>>                   break;
>>                 }
>> -              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
>> -            }
>> +             next_load = DR_GROUP_NEXT_ELEMENT (load_info);
>> +           }
>>           if (subchain_p)
>>             SLP_TREE_LOAD_PERMUTATION (node).release ();
>>           else
>>             {
>> -             stmt_vec_info group_info
>> -               = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
>> -             group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>> +             stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +             group_info
>> +               = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>>               unsigned HOST_WIDE_INT nunits;
>>               unsigned k, maxk = 0;
>>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
>> @@ -1831,7 +1828,7 @@ vect_supported_load_permutation_p (slp_i
>>    poly_uint64 test_vf
>>      = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
>>                              LOOP_VINFO_VECT_FACTOR
>> -                            (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
>> +                            (STMT_VINFO_LOOP_VINFO (stmt_info)));
>>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>>      if (node->load_permutation.exists ()
>>         && !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
>> @@ -1847,15 +1844,15 @@ vect_supported_load_permutation_p (slp_i
>>  gimple *
>>  vect_find_last_scalar_stmt_in_slp (slp_tree node)
>>  {
>> -  gimple *last = NULL, *stmt;
>> +  gimple *last = NULL;
>> +  stmt_vec_info stmt_vinfo;
>>
>> -  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
>> +  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>>      {
>> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>>        if (is_pattern_stmt_p (stmt_vinfo))
>>         last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
>>        else
>> -       last = get_later_stmt (stmt, last);
>> +       last = get_later_stmt (stmt_vinfo, last);
>>      }
>>
>>    return last;
>> @@ -1926,6 +1923,7 @@ calculate_unrolling_factor (poly_uint64
>>  vect_analyze_slp_instance (vec_info *vinfo,
>>                            gimple *stmt, unsigned max_tree_size)
>>  {
>> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>>    slp_instance new_instance;
>>    slp_tree node;
>>    unsigned int group_size;
>> @@ -1934,25 +1932,25 @@ vect_analyze_slp_instance (vec_info *vin
>>    stmt_vec_info next_info;
>>    unsigned int i;
>>    vec<slp_tree> loads;
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
>> -  vec<gimple *> scalar_stmts;
>> +  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> +  vec<stmt_vec_info> scalar_stmts;
>>
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>      {
>>        scalar_type = TREE_TYPE (DR_REF (dr));
>>        vectype = get_vectype_for_scalar_type (scalar_type);
>> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
>> +      group_size = DR_GROUP_SIZE (stmt_info);
>>      }
>> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      {
>>        gcc_assert (is_a <loop_vec_info> (vinfo));
>> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
>> -      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
>> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>> +      group_size = REDUC_GROUP_SIZE (stmt_info);
>>      }
>>    else
>>      {
>>        gcc_assert (is_a <loop_vec_info> (vinfo));
>> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
>> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>>        group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
>>      }
>>
>> @@ -1973,38 +1971,38 @@ vect_analyze_slp_instance (vec_info *vin
>>    /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
>>    scalar_stmts.create (group_size);
>>    next = stmt;
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>      {
>>        /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
>>        while (next)
>>          {
>> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
>> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
>> -           scalar_stmts.safe_push (
>> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
>> +         next_info = vinfo_for_stmt (next);
>> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
>> +             && STMT_VINFO_RELATED_STMT (next_info))
>> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>>           else
>> -            scalar_stmts.safe_push (next);
>> +           scalar_stmts.safe_push (next_info);
>>            next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>>          }
>>      }
>> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      {
>>        /* Collect the reduction stmts and store them in
>>          SLP_TREE_SCALAR_STMTS.  */
>>        while (next)
>>          {
>> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
>> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
>> -           scalar_stmts.safe_push (
>> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
>> +         next_info = vinfo_for_stmt (next);
>> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
>> +             && STMT_VINFO_RELATED_STMT (next_info))
>> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>>           else
>> -            scalar_stmts.safe_push (next);
>> +           scalar_stmts.safe_push (next_info);
>>            next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>>          }
>>       /* Mark the first element of the reduction chain as reduction to properly
>>          transform the node.  In the reduction analysis phase only the last
>>          element of the chain is marked as reduction.  */
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
>> +      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>>      }
>>    else
>>      {
>> @@ -2068,15 +2066,16 @@ vect_analyze_slp_instance (vec_info *vin
>>         {
>>           vec<unsigned> load_permutation;
>>           int j;
>> -         gimple *load, *first_stmt;
>> +         stmt_vec_info load_info;
>> +         gimple *first_stmt;
>>           bool this_load_permuted = false;
>>           load_permutation.create (group_size);
>>           first_stmt = DR_GROUP_FIRST_ELEMENT
>> -             (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
>> -         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
>> +           (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
>>             {
>> -                 int load_place = vect_get_place_in_interleaving_chain
>> -                                    (load, first_stmt);
>> +             int load_place = vect_get_place_in_interleaving_chain
>> +               (load_info, first_stmt);
>>               gcc_assert (load_place != -1);
>>               if (load_place != j)
>>                 this_load_permuted = true;
>> @@ -2124,7 +2123,7 @@ vect_analyze_slp_instance (vec_info *vin
>>           FOR_EACH_VEC_ELT (loads, i, load_node)
>>             {
>>               gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
>> -                 (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
>> +               (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>>               stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
>>                   /* Use SLP for strided accesses (or if we
>>                      can't load-lanes).  */
>> @@ -2307,10 +2306,10 @@ vect_make_slp_decision (loop_vec_info lo
>>  static void
>>  vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
>> +  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
>>    imm_use_iterator imm_iter;
>>    gimple *use_stmt;
>> -  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
>> +  stmt_vec_info use_vinfo;
>>    slp_tree child;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>>    int j;
>> @@ -2326,6 +2325,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>>        gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
>>        /* If we get a pattern stmt here we have to use the LHS of the
>>           original stmt for immediate uses.  */
>> +      gimple *stmt = stmt_vinfo->stmt;
>>        if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
>>           && STMT_VINFO_RELATED_STMT (stmt_vinfo))
>>         stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
>> @@ -2366,7 +2366,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>>        if (dump_enabled_p ())
>>         {
>>           dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
>>         }
>>        STMT_SLP_TYPE (stmt_vinfo) = hybrid;
>>      }
>> @@ -2525,9 +2525,8 @@ vect_slp_analyze_node_operations_1 (vec_
>>                                     slp_instance node_instance,
>>                                     stmt_vector_for_cost *cost_vec)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> -  gcc_assert (stmt_info);
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +  gimple *stmt = stmt_info->stmt;
>>    gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
>>
>>    /* For BB vectorization vector types are assigned here.
>> @@ -2551,10 +2550,10 @@ vect_slp_analyze_node_operations_1 (vec_
>>             return false;
>>         }
>>
>> -      gimple *sstmt;
>> +      stmt_vec_info sstmt_info;
>>        unsigned int i;
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
>> -       STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
>> +       STMT_VINFO_VECTYPE (sstmt_info) = vectype;
>>      }
>>
>>    /* Calculate the number of vector statements to be created for the
>> @@ -2626,14 +2625,14 @@ vect_slp_analyze_node_operations (vec_in
>>    /* Push SLP node def-type to stmt operands.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>>         = SLP_TREE_DEF_TYPE (child);
>>    bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
>>                                                  cost_vec);
>>    /* Restore def-types.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>>         = vect_internal_def;
>>    if (! res)
>>      return false;
>> @@ -2665,11 +2664,11 @@ vect_slp_analyze_operations (vec_info *v
>>                                              instance, visited, &lvisited,
>>                                              &cost_vec))
>>          {
>> +         slp_tree node = SLP_INSTANCE_TREE (instance);
>> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>           dump_printf_loc (MSG_NOTE, vect_location,
>> "removing SLP instance operations starting from: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>> -                           SLP_TREE_SCALAR_STMTS
>> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>           vect_free_slp_instance (instance, false);
>>            vinfo->slp_instances.ordered_remove (i);
>>           cost_vec.release ();
>> @@ -2701,14 +2700,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>                          stmt_vector_for_cost *cost_vec)
>>  {
>>    unsigned i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> +      gimple *stmt = stmt_info->stmt;
>>        ssa_op_iter op_iter;
>>        def_operand_p def_p;
>> -      stmt_vec_info stmt_info;
>>
>>        if ((*life)[i])
>>         continue;
>> @@ -2724,8 +2723,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>           gimple *use_stmt;
>>           FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
>>             if (!is_gimple_debug (use_stmt)
>> -               && (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
>> -                                            use_stmt)
>> +               && (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
>>                     || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
>>               {
>>                 (*life)[i] = true;
>> @@ -2740,7 +2738,6 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>         continue;
>>        gimple_set_visited (stmt, true);
>>
>> -      stmt_info = vinfo_for_stmt (stmt);
>>        vect_cost_for_stmt kind;
>>        if (STMT_VINFO_DATA_REF (stmt_info))
>>          {
>> @@ -2944,11 +2941,11 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
>>        if (! vect_slp_analyze_and_verify_instance_alignment (instance)
>>           || ! vect_slp_analyze_instance_dependence (instance))
>>         {
>> +         slp_tree node = SLP_INSTANCE_TREE (instance);
>> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>           dump_printf_loc (MSG_NOTE, vect_location,
>> "removing SLP instance operations starting from: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>> -                           SLP_TREE_SCALAR_STMTS
>> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>           vect_free_slp_instance (instance, false);
>>           BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
>>           continue;
>> @@ -3299,9 +3296,9 @@ vect_get_constant_vectors (tree op, slp_
>>                             vec<tree> *vec_oprnds,
>>                             unsigned int op_num, unsigned int number_of_vectors)
>>  {
>> -  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
>> -  gimple *stmt = stmts[0];
>> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>> +  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
>> +  stmt_vec_info stmt_vinfo = stmts[0];
>> +  gimple *stmt = stmt_vinfo->stmt;
>>    unsigned HOST_WIDE_INT nunits;
>>    tree vec_cst;
>>    unsigned j, number_of_places_left_in_vector;
>> @@ -3320,7 +3317,7 @@ vect_get_constant_vectors (tree op, slp_
>>
>>    /* Check if vector type is a boolean vector.  */
>>    if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
>> -      && vect_mask_constant_operand_p (stmt, op_num))
>> +      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
>>      vector_type
>>        = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
>>    else
>> @@ -3366,8 +3363,9 @@ vect_get_constant_vectors (tree op, slp_
>>    bool place_after_defs = false;
>>    for (j = 0; j < number_of_copies; j++)
>>      {
>> -      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
>> +      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
>>          {
>> +         stmt = stmt_vinfo->stmt;
>>            if (is_store)
>>              op = gimple_assign_rhs1 (stmt);
>>            else
>> @@ -3496,10 +3494,12 @@ vect_get_constant_vectors (tree op, slp_
>>                 {
>>                   gsi = gsi_for_stmt
>>                           (vect_find_last_scalar_stmt_in_slp (slp_node));
>> -                 init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
>> +                 init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
>> +                                          &gsi);
>>                 }
>>               else
>> -               init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
>> +               init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
>> +                                        NULL);
>>               if (ctor_seq != NULL)
>>                 {
>>                   gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
>> @@ -3612,15 +3612,14 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>>           /* We have to check both pattern and original def, if available.  */
>>           if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
>>             {
>> -             gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
>> -             stmt_vec_info related
>> -               = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
>> +             stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
>> +             stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
>>               tree first_def_op;
>>
>> -             if (gimple_code (first_def) == GIMPLE_PHI)
>> +             if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
>>                 first_def_op = gimple_phi_result (first_def);
>>               else
>> -               first_def_op = gimple_get_lhs (first_def);
>> +               first_def_op = gimple_get_lhs (first_def_info->stmt);
>>               if (operand_equal_p (oprnd, first_def_op, 0)
>>                   || (related
>>                       && operand_equal_p (oprnd,
>> @@ -3686,8 +3685,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                              slp_instance slp_node_instance, bool analyze_only,
>>                               unsigned *n_perms)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>    vec_info *vinfo = stmt_info->vinfo;
>>    tree mask_element_type = NULL_TREE, mask_type;
>>    int vec_index = 0;
>> @@ -3779,7 +3777,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                                    "permutation requires at "
>>                                    "least three vectors ");
>>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                   stmt, 0);
>> +                                   stmt_info->stmt, 0);
>>                 }
>>               gcc_assert (analyze_only);
>>               return false;
>> @@ -3832,6 +3830,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                   stmt_vec_info perm_stmt_info;
>>                   if (! noop_p)
>>                     {
>> +                     gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>                       tree perm_dest
>>                         = vect_create_destination_var (gimple_assign_lhs (stmt),
>>                                                        vectype);
>> @@ -3841,7 +3840,8 @@ vect_transform_slp_perm_load (slp_tree n
>>                                                first_vec, second_vec,
>>                                                mask_vec);
>>                       perm_stmt_info
>> -                       = vect_finish_stmt_generation (stmt, perm_stmt, gsi);
>> +                       = vect_finish_stmt_generation (stmt_info, perm_stmt,
>> +                                                      gsi);
>>                     }
>>                   else
>>                     /* If mask was NULL_TREE generate the requested
>> @@ -3870,7 +3870,6 @@ vect_transform_slp_perm_load (slp_tree n
>>  vect_schedule_slp_instance (slp_tree node, slp_instance instance,
>>                             scalar_stmts_to_slp_tree_map_t *bst_map)
>>  {
>> -  gimple *stmt;
>>    bool grouped_store, is_store;
>>    gimple_stmt_iterator si;
>>    stmt_vec_info stmt_info;
>> @@ -3897,11 +3896,13 @@ vect_schedule_slp_instance (slp_tree nod
>>    /* Push SLP node def-type to stmts.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
>> -        STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
>> +      {
>> +       stmt_vec_info child_stmt_info;
>> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
>> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
>> +      }
>>
>> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_info = vinfo_for_stmt (stmt);
>> +  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>
>>    /* VECTYPE is the type of the destination.  */
>>    vectype = STMT_VINFO_VECTYPE (stmt_info);
>> @@ -3916,7 +3917,7 @@ vect_schedule_slp_instance (slp_tree nod
>>      {
>>        dump_printf_loc (MSG_NOTE,vect_location,
>>                        "------>vectorizing SLP node starting from: ");
>> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
>> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>      }
>>
>>    /* Vectorized stmts go before the last scalar stmt which is where
>> @@ -3928,7 +3929,7 @@ vect_schedule_slp_instance (slp_tree nod
>>       chain is marked as reduction.  */
>>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>>        && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
>> -      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
>> +      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>>      {
>>        STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>>        STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
>> @@ -3938,29 +3939,33 @@ vect_schedule_slp_instance (slp_tree nod
>>       both operations and then performing a merge.  */
>>    if (SLP_TREE_TWO_OPERATORS (node))
>>      {
>> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>        enum tree_code code0 = gimple_assign_rhs_code (stmt);
>>        enum tree_code ocode = ERROR_MARK;
>> -      gimple *ostmt;
>> +      stmt_vec_info ostmt_info;
>>        vec_perm_builder mask (group_size, group_size, 1);
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
>> -       if (gimple_assign_rhs_code (ostmt) != code0)
>> -         {
>> -           mask.quick_push (1);
>> -           ocode = gimple_assign_rhs_code (ostmt);
>> -         }
>> -       else
>> -         mask.quick_push (0);
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
>> +       {
>> +         gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
>> +         if (gimple_assign_rhs_code (ostmt) != code0)
>> +           {
>> +             mask.quick_push (1);
>> +             ocode = gimple_assign_rhs_code (ostmt);
>> +           }
>> +         else
>> +           mask.quick_push (0);
>> +       }
>>        if (ocode != ERROR_MARK)
>>         {
>>           vec<stmt_vec_info> v0;
>>           vec<stmt_vec_info> v1;
>>           unsigned j;
>>           tree tmask = NULL_TREE;
>> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>>           v0 = SLP_TREE_VEC_STMTS (node).copy ();
>>           SLP_TREE_VEC_STMTS (node).truncate (0);
>>           gimple_assign_set_rhs_code (stmt, ocode);
>> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>>           gimple_assign_set_rhs_code (stmt, code0);
>>           v1 = SLP_TREE_VEC_STMTS (node).copy ();
>>           SLP_TREE_VEC_STMTS (node).truncate (0);
>> @@ -3998,20 +4003,24 @@ vect_schedule_slp_instance (slp_tree nod
>>                                            gimple_assign_lhs (v1[j]->stmt),
>>                                            tmask);
>>               SLP_TREE_VEC_STMTS (node).quick_push
>> -               (vect_finish_stmt_generation (stmt, vstmt, &si));
>> +               (vect_finish_stmt_generation (stmt_info, vstmt, &si));
>>             }
>>           v0.release ();
>>           v1.release ();
>>           return false;
>>         }
>>      }
>> -  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
>> +                                 instance);
>>
>>    /* Restore stmt def-types.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
>> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
>> +      {
>> +       stmt_vec_info child_stmt_info;
>> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
>> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
>> +      }
>>
>>    return is_store;
>>  }
>> @@ -4024,7 +4033,7 @@ vect_schedule_slp_instance (slp_tree nod
>>  static void
>>  vect_remove_slp_scalar_calls (slp_tree node)
>>  {
>> -  gimple *stmt, *new_stmt;
>> +  gimple *new_stmt;
>>    gimple_stmt_iterator gsi;
>>    int i;
>>    slp_tree child;
>> @@ -4037,13 +4046,12 @@ vect_remove_slp_scalar_calls (slp_tree n
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_remove_slp_scalar_calls (child);
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> -      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
>> +      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
>> +      if (!stmt || gimple_bb (stmt) == NULL)
>>         continue;
>> -      stmt_info = vinfo_for_stmt (stmt);
>> -      if (stmt_info == NULL_STMT_VEC_INFO
>> -         || is_pattern_stmt_p (stmt_info)
>> +      if (is_pattern_stmt_p (stmt_info)
>>           || !PURE_SLP_STMT (stmt_info))
>>         continue;
>>        lhs = gimple_call_lhs (stmt);
>> @@ -4085,7 +4093,7 @@ vect_schedule_slp (vec_info *vinfo)
>>    FOR_EACH_VEC_ELT (slp_instances, i, instance)
>>      {
>>        slp_tree root = SLP_INSTANCE_TREE (instance);
>> -      gimple *store;
>> +      stmt_vec_info store_info;
>>        unsigned int j;
>>        gimple_stmt_iterator gsi;
>>
>> @@ -4099,20 +4107,20 @@ vect_schedule_slp (vec_info *vinfo)
>>        if (is_a <loop_vec_info> (vinfo))
>>         vect_remove_slp_scalar_calls (root);
>>
>> -      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
>> +      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
>>                    && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
>>          {
>> -          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
>> -            break;
>> +         if (!STMT_VINFO_DATA_REF (store_info))
>> +           break;
>>
>> -         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
>> -           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
>> -          /* Free the attached stmt_vec_info and remove the stmt.  */
>> -          gsi = gsi_for_stmt (store);
>> -         unlink_stmt_vdef (store);
>> -          gsi_remove (&gsi, true);
>> -         release_defs (store);
>> -          free_stmt_vec_info (store);
>> +         if (is_pattern_stmt_p (store_info))
>> +           store_info = STMT_VINFO_RELATED_STMT (store_info);
>> +         /* Free the attached stmt_vec_info and remove the stmt.  */
>> +         gsi = gsi_for_stmt (store_info);
>> +         unlink_stmt_vdef (store_info);
>> +         gsi_remove (&gsi, true);
>> +         release_defs (store_info);
>> +         free_stmt_vec_info (store_info);
>>          }
>>      }
>>
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:00.397042684 +0100
>> @@ -665,7 +665,8 @@ vect_slp_analyze_data_ref_dependence (st
>>
>>  static bool
>>  vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
>> -                                  vec<gimple *> stores, gimple *last_store)
>> +                                  vec<stmt_vec_info> stores,
>> +                                  gimple *last_store)
>>  {
>>    /* This walks over all stmts involved in the SLP load/store done
>>       in NODE verifying we can sink them up to the last stmt in the
>> @@ -673,13 +674,13 @@ vect_slp_analyze_node_dependences (slp_i
>>    gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
>>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>>      {
>> -      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
>> -      if (access == last_access)
>> +      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
>> +      if (access_info == last_access)
>>         continue;
>> -      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
>> +      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
>>        ao_ref ref;
>>        bool ref_initialized_p = false;
>> -      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
>> +      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
>>            gsi_stmt (gsi) != last_access; gsi_next (&gsi))

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info
  2018-07-24  9:58 ` [14/46] Make STMT_VINFO_VEC_STMT " Richard Sandiford
  2018-07-25  9:21   ` Richard Biener
@ 2018-08-02  0:22   ` H.J. Lu
  2018-08-02  9:58     ` Richard Sandiford
  1 sibling, 1 reply; 108+ messages in thread
From: H.J. Lu @ 2018-08-02  0:22 UTC (permalink / raw)
  To: GCC Patches, Richard Sandiford

On Tue, Jul 24, 2018 at 2:58 AM, Richard Sandiford
<richard.sandiford@arm.com> wrote:
> This patch changes STMT_VINFO_VEC_STMT from a gimple stmt to a
> stmt_vec_info and makes the vectorizable_* routines pass back
> a stmt_vec_info to vect_transform_stmt.
>
>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Change from
>         a gimple stmt to a stmt_vec_info.
>         (vectorizable_condition, vectorizable_live_operation)
>         (vectorizable_reduction, vectorizable_induction): Pass back the
>         vectorized statement as a stmt_vec_info.
>         * tree-vect-data-refs.c (vect_record_grouped_load_vectors): Update
>         use of STMT_VINFO_VEC_STMT.
>         * tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise,
>         accumulating the inner phis that feed the STMT_VINFO_VEC_STMT
>         as stmt_vec_infos rather than gimple stmts.
>         (vectorize_fold_left_reduction): Change vec_stmt from a gimple stmt
>         to a stmt_vec_info.
>         (vectorizable_live_operation): Likewise.
>         (vectorizable_reduction, vectorizable_induction): Likewise,
>         updating use of STMT_VINFO_VEC_STMT.
>         * tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Update use
>         of STMT_VINFO_VEC_STMT.
>         (vect_build_gather_load_calls, vectorizable_bswap, vectorizable_call)
>         (vectorizable_simd_clone_call, vectorizable_conversion)
>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>         (vectorizable_comparison, can_vectorize_live_stmts): Change vec_stmt
>         from a gimple stmt to a stmt_vec_info.
>         (vect_transform_stmt): Update use of STMT_VINFO_VEC_STMT.  Pass a
>         pointer to a stmt_vec_info to the vectorizable_* routines.
>

This caused:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86824

-- 
H.J.

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info
  2018-08-02  0:22   ` H.J. Lu
@ 2018-08-02  9:58     ` Richard Sandiford
  0 siblings, 0 replies; 108+ messages in thread
From: Richard Sandiford @ 2018-08-02  9:58 UTC (permalink / raw)
  To: H.J. Lu; +Cc: GCC Patches

"H.J. Lu" <hjl.tools@gmail.com> writes:
> On Tue, Jul 24, 2018 at 2:58 AM, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>> This patch changes STMT_VINFO_VEC_STMT from a gimple stmt to a
>> stmt_vec_info and makes the vectorizable_* routines pass back
>> a stmt_vec_info to vect_transform_stmt.
>>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (_stmt_vec_info::vectorized_stmt): Change from
>>         a gimple stmt to a stmt_vec_info.
>>         (vectorizable_condition, vectorizable_live_operation)
>>         (vectorizable_reduction, vectorizable_induction): Pass back the
>>         vectorized statement as a stmt_vec_info.
>>         * tree-vect-data-refs.c (vect_record_grouped_load_vectors): Update
>>         use of STMT_VINFO_VEC_STMT.
>>         * tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise,
>>         accumulating the inner phis that feed the STMT_VINFO_VEC_STMT
>>         as stmt_vec_infos rather than gimple stmts.
>>         (vectorize_fold_left_reduction): Change vec_stmt from a gimple stmt
>>         to a stmt_vec_info.
>>         (vectorizable_live_operation): Likewise.
>>         (vectorizable_reduction, vectorizable_induction): Likewise,
>>         updating use of STMT_VINFO_VEC_STMT.
>>         * tree-vect-stmts.c (vect_get_vec_def_for_operand_1): Update use
>>         of STMT_VINFO_VEC_STMT.
>>         (vect_build_gather_load_calls, vectorizable_bswap, vectorizable_call)
>>         (vectorizable_simd_clone_call, vectorizable_conversion)
>>         (vectorizable_assignment, vectorizable_shift, vectorizable_operation)
>>         (vectorizable_store, vectorizable_load, vectorizable_condition)
>>         (vectorizable_comparison, can_vectorize_live_stmts): Change vec_stmt
>>         from a gimple stmt to a stmt_vec_info.
>>         (vect_transform_stmt): Update use of STMT_VINFO_VEC_STMT.  Pass a
>>         pointer to a stmt_vec_info to the vectorizable_* routines.
>>
>
> This caused:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86824

Should be fixed by r263222 (tested on an x86 SPEC2k6 run).
Sorry again for the breakage.
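
For anyone mapping the ChangeLog above back onto the code, the shape of
the change is roughly the following.  This is a simplified sketch rather
than the exact tree-vectorizer.h code: the field and macro names match
the patch, but the forward declarations and the helper at the end are
only there to make the snippet self-contained and are not in the tree.

  /* The vectorized statement is now tracked as a stmt_vec_info rather
     than a bare gimple stmt; the underlying gimple stmt is reached
     through the ->stmt field.  */
  struct gimple;
  struct _stmt_vec_info;
  typedef _stmt_vec_info *stmt_vec_info;

  struct _stmt_vec_info {
    gimple *stmt;                   /* The scalar stmt this info describes.  */
    stmt_vec_info vectorized_stmt;  /* Was: gimple *vectorized_stmt.  */
  };

  #define STMT_VINFO_VEC_STMT(S) (S)->vectorized_stmt

  /* Hypothetical helper, not in the patch: a caller that previously did

       gimple *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);

     now reaches the gimple stmt through the stmt_vec_info.  */
  static gimple *
  vectorized_stmt_of (stmt_vec_info stmt_info)
  {
    return STMT_VINFO_VEC_STMT (stmt_info)->stmt;
  }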

Richard

^ permalink raw reply	[flat|nested] 108+ messages in thread

end of thread, other threads:[~2018-08-02  9:58 UTC | newest]

Thread overview: 108+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-24  9:52 [00/46] Remove vinfo_for_stmt etc Richard Sandiford
2018-07-24  9:52 ` [01/46] Move special cases out of get_initial_def_for_reduction Richard Sandiford
2018-07-25  8:42   ` Richard Biener
2018-07-24  9:53 ` [03/46] Remove unnecessary update of NUM_SLP_USES Richard Sandiford
2018-07-25  8:46   ` Richard Biener
2018-07-24  9:53 ` [02/46] Remove dead vectorizable_reduction code Richard Sandiford
2018-07-25  8:43   ` Richard Biener
2018-07-24  9:54 ` [05/46] Fix make_ssa_name call in vectorizable_reduction Richard Sandiford
2018-07-25  8:47   ` Richard Biener
2018-07-24  9:54 ` [04/46] Factor out the test for a valid reduction input Richard Sandiford
2018-07-25  8:46   ` Richard Biener
2018-07-24  9:55 ` [07/46] Add vec_info::lookup_stmt Richard Sandiford
2018-07-25  9:11   ` Richard Biener
2018-07-24  9:55 ` [06/46] Add vec_info::add_stmt Richard Sandiford
2018-07-25  9:10   ` Richard Biener
2018-07-24  9:55 ` [08/46] Add vec_info::lookup_def Richard Sandiford
2018-07-25  9:12   ` Richard Biener
2018-07-24  9:56 ` [09/46] Add vec_info::lookup_single_use Richard Sandiford
2018-07-25  9:13   ` Richard Biener
2018-07-24  9:57 ` [10/46] Temporarily make stmt_vec_info a class Richard Sandiford
2018-07-25  9:14   ` Richard Biener
2018-07-24  9:57 ` [11/46] Pass back a stmt_vec_info from vect_is_simple_use Richard Sandiford
2018-07-25  9:18   ` Richard Biener
2018-07-24  9:58 ` [13/46] Make STMT_VINFO_RELATED_STMT a stmt_vec_info Richard Sandiford
2018-07-25  9:19   ` Richard Biener
2018-07-24  9:58 ` [12/46] Make vect_finish_stmt_generation return a stmt_vec_info Richard Sandiford
2018-07-25  9:19   ` Richard Biener
2018-07-24  9:58 ` [14/46] Make STMT_VINFO_VEC_STMT a stmt_vec_info Richard Sandiford
2018-07-25  9:21   ` Richard Biener
2018-07-25 11:03     ` Richard Sandiford
2018-08-02  0:22   ` H.J. Lu
2018-08-02  9:58     ` Richard Sandiford
2018-07-24  9:59 ` [17/46] Make LOOP_VINFO_REDUCTIONS an auto_vec<stmt_vec_info> Richard Sandiford
2018-07-25  9:23   ` Richard Biener
2018-07-24  9:59 ` [16/46] Make STMT_VINFO_REDUC_DEF a stmt_vec_info Richard Sandiford
2018-07-25  9:22   ` Richard Biener
2018-07-24  9:59 ` [15/46] Make SLP_TREE_VEC_STMTS a vec<stmt_vec_info> Richard Sandiford
2018-07-25  9:22   ` Richard Biener
2018-07-24 10:00 ` [18/46] Make SLP_TREE_SCALAR_STMTS a vec<stmt_vec_info> Richard Sandiford
2018-07-25  9:27   ` Richard Biener
2018-07-31 15:03     ` Richard Sandiford
2018-07-24 10:01 ` [21/46] Make grouped_stores and reduction_chains use stmt_vec_infos Richard Sandiford
2018-07-25  9:28   ` Richard Biener
2018-07-24 10:01 ` [20/46] Make *FIRST_ELEMENT and *NEXT_ELEMENT stmt_vec_infos Richard Sandiford
2018-07-25  9:28   ` Richard Biener
2018-07-24 10:01 ` [19/46] Make vect_dr_stmt return a stmt_vec_info Richard Sandiford
2018-07-25  9:28   ` Richard Biener
2018-07-24 10:02 ` [24/46] Make stmt_info_for_cost use a stmt_vec_info Richard Sandiford
2018-07-25  9:30   ` Richard Biener
2018-07-24 10:02 ` [22/46] Make DR_GROUP_SAME_DR_STMT a stmt_vec_info Richard Sandiford
2018-07-25  9:29   ` Richard Biener
2018-07-24 10:02 ` [23/46] Make LOOP_VINFO_MAY_MISALIGN_STMTS use stmt_vec_info Richard Sandiford
2018-07-25  9:29   ` Richard Biener
2018-07-24 10:03 ` [26/46] Make more use of dyn_cast in tree-vect* Richard Sandiford
2018-07-25  9:31   ` Richard Biener
2018-07-24 10:03 ` [25/46] Make get_earlier/later_stmt take and return stmt_vec_infos Richard Sandiford
2018-07-25  9:31   ` Richard Biener
2018-07-24 10:03 ` [27/46] Remove duplicated stmt_vec_info lookups Richard Sandiford
2018-07-25  9:32   ` Richard Biener
2018-07-24 10:04 ` [29/46] Use stmt_vec_info instead of gimple stmts internally (part 2) Richard Sandiford
2018-07-25 10:03   ` Richard Biener
2018-07-24 10:04 ` [30/46] Use stmt_vec_infos rather than gimple stmts for worklists Richard Sandiford
2018-07-25 10:04   ` Richard Biener
2018-07-24 10:04 ` [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1) Richard Sandiford
2018-07-25  9:33   ` Richard Biener
2018-07-24 10:05 ` [32/46] Use stmt_vec_info in function interfaces (part 2) Richard Sandiford
2018-07-25 10:06   ` Richard Biener
2018-07-24 10:05 ` [31/46] Use stmt_vec_info in function interfaces (part 1) Richard Sandiford
2018-07-25 10:05   ` Richard Biener
2018-07-24 10:06 ` [34/46] Alter interface to vect_get_vec_def_for_stmt_copy Richard Sandiford
2018-07-25 10:13   ` Richard Biener
2018-07-24 10:06 ` [35/46] Alter interfaces within vect_pattern_recog Richard Sandiford
2018-07-25 10:14   ` Richard Biener
2018-07-24 10:06 ` [33/46] Use stmt_vec_infos instead of vec_info/gimple stmt pairs Richard Sandiford
2018-07-25 10:06   ` Richard Biener
2018-07-24 10:07 ` [36/46] Add a pattern_stmt_p field to stmt_vec_info Richard Sandiford
2018-07-25 10:15   ` Richard Biener
2018-07-25 11:09     ` Richard Sandiford
2018-07-25 11:48       ` Richard Biener
2018-07-26 10:29         ` Richard Sandiford
2018-07-26 11:15           ` Richard Biener
2018-07-24 10:07 ` [37/46] Associate alignment information with stmt_vec_infos Richard Sandiford
2018-07-25 10:18   ` Richard Biener
2018-07-26 10:55     ` Richard Sandiford
2018-07-26 11:13       ` Richard Biener
2018-07-24 10:08 ` [38/46] Pass stmt_vec_infos instead of data_references where relevant Richard Sandiford
2018-07-25 10:21   ` Richard Biener
2018-07-25 11:21     ` Richard Sandiford
2018-07-26 11:05       ` Richard Sandiford
2018-07-26 11:13         ` Richard Biener
2018-07-24 10:08 ` [39/46] Replace STMT_VINFO_UNALIGNED_DR with the associated statement Richard Sandiford
2018-07-26 11:08   ` [39/46 v2] Change STMT_VINFO_UNALIGNED_DR to a dr_vec_info Richard Sandiford
2018-07-26 11:13     ` Richard Biener
2018-07-24 10:09 ` [40/46] Add vec_info::lookup_dr Richard Sandiford
2018-07-26 11:10   ` [40/46 v2] Add vec_info::lookup_dr Richard Sandiford
2018-07-26 11:16     ` Richard Biener
2018-07-24 10:09 ` [41/46] Add vec_info::remove_stmt Richard Sandiford
2018-07-31 12:02   ` Richard Biener
2018-07-24 10:09 ` [42/46] Add vec_info::replace_stmt Richard Sandiford
2018-07-31 12:03   ` Richard Biener
2018-07-24 10:10 ` [45/46] Remove vect_stmt_in_region_p Richard Sandiford
2018-07-31 12:06   ` Richard Biener
2018-07-24 10:10 ` [43/46] Make free_stmt_vec_info take a stmt_vec_info Richard Sandiford
2018-07-31 12:03   ` Richard Biener
2018-07-24 10:10 ` [44/46] Remove global vinfo_for_stmt-related routines Richard Sandiford
2018-07-31 12:05   ` Richard Biener
2018-07-24 10:11 ` [46/46] Turn stmt_vec_info back into a typedef Richard Sandiford
2018-07-31 12:07   ` Richard Biener
