From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [37/46] Associate alignment information with stmt_vec_infos
Date: Tue, 24 Jul 2018 10:07:00 -0000
Message-ID: <87h8kplzew.fsf@arm.com>
In-Reply-To: <87wotlrmen.fsf@arm.com> (Richard Sandiford's message of "Tue, 24 Jul 2018 10:52:16 +0100")
Alignment information is really a property of a stmt_vec_info (and of the
way we want to vectorise it) rather than of the original scalar dr.
I think that was true even before the recent dr sharing.
This patch therefore makes the alignment-related interfaces take
stmt_vec_infos rather than data_references.
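To illustrate, the accessors keep their existing semantics and only change
which object they take.  For example, vect_known_alignment_in_bytes still
returns the scalar type's natural alignment when the misalignment is
unknown, the full target alignment when the access is aligned, and
otherwise the lowest set bit of the misalignment.  A minimal standalone
sketch of that logic, with the stmt_vec_info plumbing replaced by plain
parameters (the names here are illustrative, not GCC's):

```cpp
#include <cassert>

/* Sentinel meaning the misalignment is not known at compile time
   (mirrors DR_MISALIGNMENT_UNKNOWN).  */
constexpr int MISALIGNMENT_UNKNOWN = -1;

/* Return the minimum alignment in bytes that a vectorized access is
   guaranteed to have, given its misalignment wrt TARGET_ALIGNMENT and
   the natural alignment of its scalar type.  */
unsigned int
known_alignment_in_bytes (int misalignment, unsigned int target_alignment,
                          unsigned int scalar_alignment)
{
  if (misalignment == MISALIGNMENT_UNKNOWN)
    /* Only the scalar type's natural alignment is guaranteed.  */
    return scalar_alignment;
  if (misalignment == 0)
    /* Fully aligned: the whole target alignment is guaranteed.  */
    return target_alignment;
  /* Otherwise the guarantee is the lowest set bit of the misalignment:
     e.g. a misalignment of 12 wrt a 16-byte target still guarantees
     4-byte alignment, since 12 & -12 == 4.  */
  return misalignment & -misalignment;
}
```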
2018-07-24 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (STMT_VINFO_TARGET_ALIGNMENT): New macro.
(DR_VECT_AUX, DR_MISALIGNMENT, SET_DR_MISALIGNMENT)
(DR_TARGET_ALIGNMENT): Delete.
(set_dr_misalignment, dr_misalignment, aligned_access_p)
(known_alignment_for_access_p, vect_known_alignment_in_bytes)
(vect_dr_behavior): Take a stmt_vec_info rather than a data_reference.
* tree-vect-data-refs.c (vect_calculate_target_alignment)
(vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
(vector_alignment_reachable_p, vect_get_peeling_costs_all_drs)
(vect_peeling_supportable, vect_enhance_data_refs_alignment)
(vect_duplicate_ssa_name_ptr_info): Update after above changes.
(vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
(vect_setup_realignment, vect_supportable_dr_alignment): Likewise.
* tree-vect-loop-manip.c (get_misalign_in_elems): Likewise.
(vect_gen_prolog_loop_niters): Likewise.
* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
(compare_step_with_zero, get_group_load_store_type): Likewise.
(vect_get_data_ptr_increment, ensure_base_align, vectorizable_store)
(vectorizable_load): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2018-07-24 10:24:02.364492386 +0100
+++ gcc/tree-vectorizer.h 2018-07-24 10:24:05.744462369 +0100
@@ -1031,6 +1031,9 @@ #define STMT_VINFO_NUM_SLP_USES(S) (S)->
#define STMT_VINFO_REDUC_TYPE(S) (S)->reduc_type
#define STMT_VINFO_REDUC_DEF(S) (S)->reduc_def
+/* Only defined once dr_misalignment is defined. */
+#define STMT_VINFO_TARGET_ALIGNMENT(S) (S)->dr_aux.target_alignment
+
#define DR_GROUP_FIRST_ELEMENT(S) (gcc_checking_assert ((S)->data_ref_info), (S)->first_element)
#define DR_GROUP_NEXT_ELEMENT(S) (gcc_checking_assert ((S)->data_ref_info), (S)->next_element)
#define DR_GROUP_SIZE(S) (gcc_checking_assert ((S)->data_ref_info), (S)->size)
@@ -1048,8 +1051,6 @@ #define HYBRID_SLP_STMT(S)
#define PURE_SLP_STMT(S) ((S)->slp_type == pure_slp)
#define STMT_SLP_TYPE(S) (S)->slp_type
-#define DR_VECT_AUX(dr) (&vinfo_for_stmt (DR_STMT (dr))->dr_aux)
-
#define VECT_MAX_COST 1000
/* The maximum number of intermediate steps required in multi-step type
@@ -1256,73 +1257,72 @@ add_stmt_costs (void *data, stmt_vector_
#define DR_MISALIGNMENT_UNKNOWN (-1)
#define DR_MISALIGNMENT_UNINITIALIZED (-2)
+/* Record that the vectorized form of the data access in STMT_INFO
+ will be misaligned by VAL bytes wrt its target alignment.
+ Negative values have the meanings above. */
+
inline void
-set_dr_misalignment (struct data_reference *dr, int val)
+set_dr_misalignment (stmt_vec_info stmt_info, int val)
{
- dataref_aux *data_aux = DR_VECT_AUX (dr);
- data_aux->misalignment = val;
+ stmt_info->dr_aux.misalignment = val;
}
+/* Return the misalignment in bytes of the vectorized form of the data
+ access in STMT_INFO, relative to its target alignment. Negative
+ values have the meanings above. */
+
inline int
-dr_misalignment (struct data_reference *dr)
+dr_misalignment (stmt_vec_info stmt_info)
{
- int misalign = DR_VECT_AUX (dr)->misalignment;
+ int misalign = stmt_info->dr_aux.misalignment;
gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
return misalign;
}
-/* Reflects actual alignment of first access in the vectorized loop,
- taking into account peeling/versioning if applied. */
-#define DR_MISALIGNMENT(DR) dr_misalignment (DR)
-#define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
-
-/* Only defined once DR_MISALIGNMENT is defined. */
-#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
-
-/* Return true if data access DR is aligned to its target alignment
- (which may be less than a full vector). */
+/* Return true if the vectorized form of the data access in STMT_INFO is
+ aligned to its target alignment (which may be less than a full vector). */
static inline bool
-aligned_access_p (struct data_reference *data_ref_info)
+aligned_access_p (stmt_vec_info stmt_info)
{
- return (DR_MISALIGNMENT (data_ref_info) == 0);
+ return (dr_misalignment (stmt_info) == 0);
}
-/* Return TRUE if the alignment of the data access is known, and FALSE
- otherwise. */
+/* Return true if the alignment of the vectorized form of the data
+ access in STMT_INFO is known at compile time. */
static inline bool
-known_alignment_for_access_p (struct data_reference *data_ref_info)
+known_alignment_for_access_p (stmt_vec_info stmt_info)
{
- return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
+ return (dr_misalignment (stmt_info) != DR_MISALIGNMENT_UNKNOWN);
}
/* Return the minimum alignment in bytes that the vectorized version
- of DR is guaranteed to have. */
+ of the data reference in STMT_INFO is guaranteed to have. */
static inline unsigned int
-vect_known_alignment_in_bytes (struct data_reference *dr)
+vect_known_alignment_in_bytes (stmt_vec_info stmt_info)
{
- if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
+ data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+ int misalignment = dr_misalignment (stmt_info);
+ if (misalignment == DR_MISALIGNMENT_UNKNOWN)
return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
- if (DR_MISALIGNMENT (dr) == 0)
- return DR_TARGET_ALIGNMENT (dr);
- return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
+ if (misalignment == 0)
+ return STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
+ return misalignment & -misalignment;
}
-/* Return the behavior of DR with respect to the vectorization context
- (which for outer loop vectorization might not be the behavior recorded
- in DR itself). */
+/* Return the data reference behavior of STMT_INFO with respect to the
+ vectorization context (which for outer loop vectorization might not
+ be the behavior recorded in STMT_VINFO_DATA_REF). */
static inline innermost_loop_behavior *
-vect_dr_behavior (data_reference *dr)
+vect_dr_behavior (stmt_vec_info stmt_info)
{
- gimple *stmt = DR_STMT (dr);
- stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
if (loop_vinfo == NULL
|| !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
- return &DR_INNERMOST (dr);
+ return &DR_INNERMOST (STMT_VINFO_DATA_REF (stmt_info));
else
return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
}
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c 2018-07-24 10:24:02.356492457 +0100
+++ gcc/tree-vect-data-refs.c 2018-07-24 10:24:05.740462405 +0100
@@ -873,7 +873,7 @@ vect_calculate_target_alignment (struct
Compute the misalignment of the data reference DR.
Output:
- 1. DR_MISALIGNMENT (DR) is defined.
+ 1. dr_misalignment (STMT_INFO) is defined.
FOR NOW: No analysis is actually performed. Misalignment is calculated
only for trivial cases. TODO. */
@@ -896,17 +896,17 @@ vect_compute_data_ref_alignment (struct
loop = LOOP_VINFO_LOOP (loop_vinfo);
/* Initialize misalignment to unknown. */
- SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+ set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
return;
- innermost_loop_behavior *drb = vect_dr_behavior (dr);
+ innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
bool step_preserves_misalignment_p;
unsigned HOST_WIDE_INT vector_alignment
= vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
- DR_TARGET_ALIGNMENT (dr) = vector_alignment;
+ STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
/* No step for BB vectorization. */
if (!loop)
@@ -1009,8 +1009,8 @@ vect_compute_data_ref_alignment (struct
dump_printf (MSG_NOTE, "\n");
}
- DR_VECT_AUX (dr)->base_decl = base;
- DR_VECT_AUX (dr)->base_misaligned = true;
+ stmt_info->dr_aux.base_decl = base;
+ stmt_info->dr_aux.base_misaligned = true;
base_misalignment = 0;
}
poly_int64 misalignment
@@ -1038,12 +1038,13 @@ vect_compute_data_ref_alignment (struct
return;
}
- SET_DR_MISALIGNMENT (dr, const_misalignment);
+ set_dr_misalignment (stmt_info, const_misalignment);
if (dump_enabled_p ())
{
dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
- "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
+ "misalign = %d bytes of ref ",
+ dr_misalignment (stmt_info));
dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
}
@@ -1089,29 +1090,29 @@ vect_update_misalignment_for_peel (struc
{
if (current_dr != dr)
continue;
- gcc_assert (!known_alignment_for_access_p (dr)
- || !known_alignment_for_access_p (dr_peel)
- || (DR_MISALIGNMENT (dr) / dr_size
- == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
- SET_DR_MISALIGNMENT (dr, 0);
+ gcc_assert (!known_alignment_for_access_p (stmt_info)
+ || !known_alignment_for_access_p (peel_stmt_info)
+ || (dr_misalignment (stmt_info) / dr_size
+ == dr_misalignment (peel_stmt_info) / dr_peel_size));
+ set_dr_misalignment (stmt_info, 0);
return;
}
- if (known_alignment_for_access_p (dr)
- && known_alignment_for_access_p (dr_peel))
+ if (known_alignment_for_access_p (stmt_info)
+ && known_alignment_for_access_p (peel_stmt_info))
{
bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
- int misal = DR_MISALIGNMENT (dr);
+ int misal = dr_misalignment (stmt_info);
misal += negative ? -npeel * dr_size : npeel * dr_size;
- misal &= DR_TARGET_ALIGNMENT (dr) - 1;
- SET_DR_MISALIGNMENT (dr, misal);
+ misal &= STMT_VINFO_TARGET_ALIGNMENT (stmt_info) - 1;
+ set_dr_misalignment (stmt_info, misal);
return;
}
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
"to unknown (-1).\n");
- SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+ set_dr_misalignment (stmt_info, DR_MISALIGNMENT_UNKNOWN);
}
@@ -1219,13 +1220,13 @@ vector_alignment_reachable_p (struct dat
int elem_size, mis_in_elements;
/* FORNOW: handle only known alignment. */
- if (!known_alignment_for_access_p (dr))
+ if (!known_alignment_for_access_p (stmt_info))
return false;
poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
elem_size = vector_element_size (vector_size, nelements);
- mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
+ mis_in_elements = dr_misalignment (stmt_info) / elem_size;
if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
return false;
@@ -1233,7 +1234,8 @@ vector_alignment_reachable_p (struct dat
/* If misalignment is known at the compile time then allow peeling
only if natural alignment is reachable through peeling. */
- if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
+ if (known_alignment_for_access_p (stmt_info)
+ && !aligned_access_p (stmt_info))
{
HOST_WIDE_INT elmsize =
int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
@@ -1241,10 +1243,10 @@ vector_alignment_reachable_p (struct dat
{
dump_printf_loc (MSG_NOTE, vect_location,
"data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
- dump_printf (MSG_NOTE,
- ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
+ dump_printf (MSG_NOTE, ". misalignment = %d.\n",
+ dr_misalignment (stmt_info));
}
- if (DR_MISALIGNMENT (dr) % elmsize)
+ if (dr_misalignment (stmt_info) % elmsize)
{
if (dump_enabled_p ())
dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1253,7 +1255,7 @@ vector_alignment_reachable_p (struct dat
}
}
- if (!known_alignment_for_access_p (dr))
+ if (!known_alignment_for_access_p (stmt_info))
{
tree type = TREE_TYPE (DR_REF (dr));
bool is_packed = not_size_aligned (DR_REF (dr));
@@ -1401,6 +1403,8 @@ vect_get_peeling_costs_all_drs (vec<data
unsigned int npeel,
bool unknown_misalignment)
{
+ stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
+ : NULL_STMT_VEC_INFO);
unsigned i;
data_reference *dr;
@@ -1423,16 +1427,16 @@ vect_get_peeling_costs_all_drs (vec<data
continue;
int save_misalignment;
- save_misalignment = DR_MISALIGNMENT (dr);
+ save_misalignment = dr_misalignment (stmt_info);
if (npeel == 0)
;
- else if (unknown_misalignment && dr == dr0)
- SET_DR_MISALIGNMENT (dr, 0);
+ else if (unknown_misalignment && stmt_info == peel_stmt_info)
+ set_dr_misalignment (stmt_info, 0);
else
vect_update_misalignment_for_peel (dr, dr0, npeel);
vect_get_data_access_cost (dr, inside_cost, outside_cost,
body_cost_vec, prologue_cost_vec);
- SET_DR_MISALIGNMENT (dr, save_misalignment);
+ set_dr_misalignment (stmt_info, save_misalignment);
}
}
@@ -1552,10 +1556,10 @@ vect_peeling_supportable (loop_vec_info
&& !STMT_VINFO_GROUPED_ACCESS (stmt_info))
continue;
- save_misalignment = DR_MISALIGNMENT (dr);
+ save_misalignment = dr_misalignment (stmt_info);
vect_update_misalignment_for_peel (dr, dr0, npeel);
supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
- SET_DR_MISALIGNMENT (dr, save_misalignment);
+ set_dr_misalignment (stmt_info, save_misalignment);
if (!supportable_dr_alignment)
return false;
@@ -1598,27 +1602,27 @@ vect_peeling_supportable (loop_vec_info
-- original loop, before alignment analysis:
for (i=0; i<N; i++){
- x = q[i]; # DR_MISALIGNMENT(q) = unknown
- p[i] = y; # DR_MISALIGNMENT(p) = unknown
+ x = q[i]; # dr_misalignment(q) = unknown
+ p[i] = y; # dr_misalignment(p) = unknown
}
-- After vect_compute_data_refs_alignment:
for (i=0; i<N; i++){
- x = q[i]; # DR_MISALIGNMENT(q) = 3
- p[i] = y; # DR_MISALIGNMENT(p) = unknown
+ x = q[i]; # dr_misalignment(q) = 3
+ p[i] = y; # dr_misalignment(p) = unknown
}
-- Possibility 1: we do loop versioning:
if (p is aligned) {
for (i=0; i<N; i++){ # loop 1A
- x = q[i]; # DR_MISALIGNMENT(q) = 3
- p[i] = y; # DR_MISALIGNMENT(p) = 0
+ x = q[i]; # dr_misalignment(q) = 3
+ p[i] = y; # dr_misalignment(p) = 0
}
}
else {
for (i=0; i<N; i++){ # loop 1B
- x = q[i]; # DR_MISALIGNMENT(q) = 3
- p[i] = y; # DR_MISALIGNMENT(p) = unaligned
+ x = q[i]; # dr_misalignment(q) = 3
+ p[i] = y; # dr_misalignment(p) = unaligned
}
}
@@ -1628,8 +1632,8 @@ vect_peeling_supportable (loop_vec_info
p[i] = y;
}
for (i = 3; i < N; i++){ # loop 2A
- x = q[i]; # DR_MISALIGNMENT(q) = 0
- p[i] = y; # DR_MISALIGNMENT(p) = unknown
+ x = q[i]; # dr_misalignment(q) = 0
+ p[i] = y; # dr_misalignment(p) = unknown
}
-- Possibility 3: combination of loop peeling and versioning:
@@ -1639,14 +1643,14 @@ vect_peeling_supportable (loop_vec_info
}
if (p is aligned) {
for (i = 3; i<N; i++){ # loop 3A
- x = q[i]; # DR_MISALIGNMENT(q) = 0
- p[i] = y; # DR_MISALIGNMENT(p) = 0
+ x = q[i]; # dr_misalignment(q) = 0
+ p[i] = y; # dr_misalignment(p) = 0
}
}
else {
for (i = 3; i<N; i++){ # loop 3B
- x = q[i]; # DR_MISALIGNMENT(q) = 0
- p[i] = y; # DR_MISALIGNMENT(p) = unaligned
+ x = q[i]; # dr_misalignment(q) = 0
+ p[i] = y; # dr_misalignment(p) = unaligned
}
}
@@ -1745,17 +1749,20 @@ vect_enhance_data_refs_alignment (loop_v
do_peeling = vector_alignment_reachable_p (dr);
if (do_peeling)
{
- if (known_alignment_for_access_p (dr))
+ if (known_alignment_for_access_p (stmt_info))
{
unsigned int npeel_tmp = 0;
bool negative = tree_int_cst_compare (DR_STEP (dr),
size_zero_node) < 0;
vectype = STMT_VINFO_VECTYPE (stmt_info);
- unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+ unsigned int target_align
+ = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
unsigned int dr_size = vect_get_scalar_dr_size (dr);
- mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
- if (DR_MISALIGNMENT (dr) != 0)
+ mis = (negative
+ ? dr_misalignment (stmt_info)
+ : -dr_misalignment (stmt_info));
+ if (mis != 0)
npeel_tmp = (mis & (target_align - 1)) / dr_size;
/* For multiple types, it is possible that the bigger type access
@@ -1780,7 +1787,7 @@ vect_enhance_data_refs_alignment (loop_v
/* NPEEL_TMP is 0 when there is no misalignment, but also
allow peeling NELEMENTS. */
- if (DR_MISALIGNMENT (dr) == 0)
+ if (dr_misalignment (stmt_info) == 0)
possible_npeel_number++;
}
@@ -1841,7 +1848,7 @@ vect_enhance_data_refs_alignment (loop_v
}
else
{
- if (!aligned_access_p (dr))
+ if (!aligned_access_p (stmt_info))
{
if (dump_enabled_p ())
dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2010,10 +2017,10 @@ vect_enhance_data_refs_alignment (loop_v
if (do_peeling)
{
- stmt_vec_info stmt_info = vect_dr_stmt (dr0);
- vectype = STMT_VINFO_VECTYPE (stmt_info);
+ stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
+ vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
- if (known_alignment_for_access_p (dr0))
+ if (known_alignment_for_access_p (peel_stmt_info))
{
bool negative = tree_int_cst_compare (DR_STEP (dr0),
size_zero_node) < 0;
@@ -2021,11 +2028,14 @@ vect_enhance_data_refs_alignment (loop_v
{
/* Since it's known at compile time, compute the number of
iterations in the peeled loop (the peeling factor) for use in
- updating DR_MISALIGNMENT values. The peeling factor is the
+ updating dr_misalignment values. The peeling factor is the
vectorization factor minus the misalignment as an element
count. */
- mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
- unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+ mis = (negative
+ ? dr_misalignment (peel_stmt_info)
+ : -dr_misalignment (peel_stmt_info));
+ unsigned int target_align
+ = STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
npeel = ((mis & (target_align - 1))
/ vect_get_scalar_dr_size (dr0));
}
@@ -2033,9 +2043,8 @@ vect_enhance_data_refs_alignment (loop_v
/* For interleaved data access every iteration accesses all the
members of the group, therefore we divide the number of iterations
by the group size. */
- stmt_info = vect_dr_stmt (dr0);
- if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
- npeel /= DR_GROUP_SIZE (stmt_info);
+ if (STMT_VINFO_GROUPED_ACCESS (peel_stmt_info))
+ npeel /= DR_GROUP_SIZE (peel_stmt_info);
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
@@ -2047,7 +2056,9 @@ vect_enhance_data_refs_alignment (loop_v
do_peeling = false;
/* Check if all datarefs are supportable and log. */
- if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
+ if (do_peeling
+ && known_alignment_for_access_p (peel_stmt_info)
+ && npeel == 0)
{
stat = vect_verify_datarefs_alignment (loop_vinfo);
if (!stat)
@@ -2066,7 +2077,8 @@ vect_enhance_data_refs_alignment (loop_v
unsigned max_peel = npeel;
if (max_peel == 0)
{
- unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+ unsigned int target_align
+ = STMT_VINFO_TARGET_ALIGNMENT (peel_stmt_info);
max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
}
if (max_peel > max_allowed_peel)
@@ -2095,19 +2107,20 @@ vect_enhance_data_refs_alignment (loop_v
if (do_peeling)
{
- /* (1.2) Update the DR_MISALIGNMENT of each data reference DR_i.
- If the misalignment of DR_i is identical to that of dr0 then set
- DR_MISALIGNMENT (DR_i) to zero. If the misalignment of DR_i and
- dr0 are known at compile time then increment DR_MISALIGNMENT (DR_i)
- by the peeling factor times the element size of DR_i (MOD the
- vectorization factor times the size). Otherwise, the
- misalignment of DR_i must be set to unknown. */
+ /* (1.2) Update the dr_misalignment of each data reference
+ statement STMT_i. If the misalignment of STMT_i is identical
+ to that of PEEL_STMT_INFO then set dr_misalignment (STMT_i)
+ to zero. If the misalignment of STMT_i and PEEL_STMT_INFO are
+ known at compile time then increment dr_misalignment (STMT_i)
+ by the peeling factor times the element size of STMT_i (MOD
+ the vectorization factor times the size). Otherwise, the
+ misalignment of STMT_i must be set to unknown. */
FOR_EACH_VEC_ELT (datarefs, i, dr)
if (dr != dr0)
{
/* Strided accesses perform only component accesses, alignment
is irrelevant for them. */
- stmt_info = vect_dr_stmt (dr);
+ stmt_vec_info stmt_info = vect_dr_stmt (dr);
if (STMT_VINFO_STRIDED_P (stmt_info)
&& !STMT_VINFO_GROUPED_ACCESS (stmt_info))
continue;
@@ -2120,8 +2133,8 @@ vect_enhance_data_refs_alignment (loop_v
LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
else
LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
- = DR_MISALIGNMENT (dr0);
- SET_DR_MISALIGNMENT (dr0, 0);
+ = dr_misalignment (peel_stmt_info);
+ set_dr_misalignment (peel_stmt_info, 0);
if (dump_enabled_p ())
{
dump_printf_loc (MSG_NOTE, vect_location,
@@ -2160,7 +2173,7 @@ vect_enhance_data_refs_alignment (loop_v
/* For interleaving, only the alignment of the first access
matters. */
- if (aligned_access_p (dr)
+ if (aligned_access_p (stmt_info)
|| (STMT_VINFO_GROUPED_ACCESS (stmt_info)
&& DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
continue;
@@ -2182,7 +2195,7 @@ vect_enhance_data_refs_alignment (loop_v
int mask;
tree vectype;
- if (known_alignment_for_access_p (dr)
+ if (known_alignment_for_access_p (stmt_info)
|| LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
>= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
{
@@ -2241,8 +2254,7 @@ vect_enhance_data_refs_alignment (loop_v
of the loop being vectorized. */
FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
{
- dr = STMT_VINFO_DATA_REF (stmt_info);
- SET_DR_MISALIGNMENT (dr, 0);
+ set_dr_misalignment (stmt_info, 0);
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
"Alignment of access forced using versioning.\n");
@@ -4456,13 +4468,14 @@ vect_get_new_ssa_name (tree type, enum v
static void
vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
{
+ stmt_vec_info stmt_info = vect_dr_stmt (dr);
duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
- int misalign = DR_MISALIGNMENT (dr);
+ int misalign = dr_misalignment (stmt_info);
if (misalign == DR_MISALIGNMENT_UNKNOWN)
mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (name));
else
set_ptr_info_alignment (SSA_NAME_PTR_INFO (name),
- DR_TARGET_ALIGNMENT (dr), misalign);
+ STMT_VINFO_TARGET_ALIGNMENT (stmt_info), misalign);
}
/* Function vect_create_addr_base_for_vector_ref.
@@ -4513,7 +4526,7 @@ vect_create_addr_base_for_vector_ref (st
tree vect_ptr_type;
tree step = TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr)));
loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
- innermost_loop_behavior *drb = vect_dr_behavior (dr);
+ innermost_loop_behavior *drb = vect_dr_behavior (stmt_info);
tree data_ref_base = unshare_expr (drb->base_address);
tree base_offset = unshare_expr (drb->offset);
@@ -4687,7 +4700,7 @@ vect_create_data_ref_ptr (stmt_vec_info
/* Check the step (evolution) of the load in LOOP, and record
whether it's invariant. */
- step = vect_dr_behavior (dr)->step;
+ step = vect_dr_behavior (stmt_info)->step;
if (integer_zerop (step))
*inv_p = true;
else
@@ -5519,7 +5532,7 @@ vect_setup_realignment (stmt_vec_info st
new_temp = copy_ssa_name (ptr);
else
new_temp = make_ssa_name (TREE_TYPE (ptr));
- unsigned int align = DR_TARGET_ALIGNMENT (dr);
+ unsigned int align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
new_stmt = gimple_build_assign
(new_temp, BIT_AND_EXPR, ptr,
build_int_cst (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
@@ -6438,7 +6451,7 @@ vect_supportable_dr_alignment (struct da
struct loop *vect_loop = NULL;
bool nested_in_vect_loop = false;
- if (aligned_access_p (dr) && !check_aligned_accesses)
+ if (aligned_access_p (stmt_info) && !check_aligned_accesses)
return dr_aligned;
/* For now assume all conditional loads/stores support unaligned
@@ -6546,11 +6559,11 @@ vect_supportable_dr_alignment (struct da
else
return dr_explicit_realign_optimized;
}
- if (!known_alignment_for_access_p (dr))
+ if (!known_alignment_for_access_p (stmt_info))
is_packed = not_size_aligned (DR_REF (dr));
if (targetm.vectorize.support_vector_misalignment
- (mode, type, DR_MISALIGNMENT (dr), is_packed))
+ (mode, type, dr_misalignment (stmt_info), is_packed))
/* Can't software pipeline the loads, but can at least do them. */
return dr_unaligned_supported;
}
@@ -6559,11 +6572,11 @@ vect_supportable_dr_alignment (struct da
bool is_packed = false;
tree type = (TREE_TYPE (DR_REF (dr)));
- if (!known_alignment_for_access_p (dr))
+ if (!known_alignment_for_access_p (stmt_info))
is_packed = not_size_aligned (DR_REF (dr));
if (targetm.vectorize.support_vector_misalignment
- (mode, type, DR_MISALIGNMENT (dr), is_packed))
+ (mode, type, dr_misalignment (stmt_info), is_packed))
return dr_unaligned_supported;
}
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c 2018-07-24 10:23:46.112636713 +0100
+++ gcc/tree-vect-loop-manip.c 2018-07-24 10:24:05.740462405 +0100
@@ -1564,7 +1564,7 @@ get_misalign_in_elems (gimple **seq, loo
stmt_vec_info stmt_info = vect_dr_stmt (dr);
tree vectype = STMT_VINFO_VECTYPE (stmt_info);
- unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+ unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
gcc_assert (target_align != 0);
bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
@@ -1600,7 +1600,7 @@ get_misalign_in_elems (gimple **seq, loo
refer to an aligned location. The following computation is generated:
If the misalignment of DR is known at compile time:
- addr_mis = int mis = DR_MISALIGNMENT (dr);
+ addr_mis = int mis = dr_misalignment (stmt-containing-DR);
Else, compute address misalignment in bytes:
addr_mis = addr & (target_align - 1)
@@ -1633,7 +1633,7 @@ vect_gen_prolog_loop_niters (loop_vec_in
tree iters, iters_name;
stmt_vec_info stmt_info = vect_dr_stmt (dr);
tree vectype = STMT_VINFO_VECTYPE (stmt_info);
- unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+ unsigned int target_align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) > 0)
{
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2018-07-24 10:24:02.364492386 +0100
+++ gcc/tree-vect-stmts.c 2018-07-24 10:24:05.744462369 +0100
@@ -1079,7 +1079,8 @@ vect_get_store_cost (stmt_vec_info stmt_
/* Here, we assign an additional cost for the unaligned store. */
*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
unaligned_store, stmt_info,
- DR_MISALIGNMENT (dr), vect_body);
+ dr_misalignment (stmt_info),
+ vect_body);
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
"vect_model_store_cost: unaligned supported by "
@@ -1257,7 +1258,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
/* Here, we assign an additional cost for the unaligned load. */
*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
unaligned_load, stmt_info,
- DR_MISALIGNMENT (dr), vect_body);
+ dr_misalignment (stmt_info),
+ vect_body);
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
@@ -2102,8 +2104,7 @@ vect_use_strided_gather_scatters_p (stmt
static int
compare_step_with_zero (stmt_vec_info stmt_info)
{
- data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
- return tree_int_cst_compare (vect_dr_behavior (dr)->step,
+ return tree_int_cst_compare (vect_dr_behavior (stmt_info)->step,
size_zero_node);
}
@@ -2218,7 +2219,7 @@ get_group_load_store_type (stmt_vec_info
be a multiple of B and so we are guaranteed to access a
non-gap element in the same B-sized block. */
if (overrun_p
- && gap < (vect_known_alignment_in_bytes (first_dr)
+ && gap < (vect_known_alignment_in_bytes (first_stmt_info)
/ vect_get_scalar_dr_size (first_dr)))
overrun_p = false;
if (overrun_p && !can_overrun_p)
@@ -2246,7 +2247,7 @@ get_group_load_store_type (stmt_vec_info
same B-sized block. */
if (would_overrun_p
&& !masked_p
- && gap < (vect_known_alignment_in_bytes (first_dr)
+ && gap < (vect_known_alignment_in_bytes (first_stmt_info)
/ vect_get_scalar_dr_size (first_dr)))
would_overrun_p = false;
@@ -2931,11 +2932,12 @@ vect_get_strided_load_store_ops (stmt_ve
vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
vect_memory_access_type memory_access_type)
{
+ stmt_vec_info stmt_info = vect_dr_stmt (dr);
if (memory_access_type == VMAT_INVARIANT)
return size_zero_node;
tree iv_step = TYPE_SIZE_UNIT (aggr_type);
- tree step = vect_dr_behavior (dr)->step;
+ tree step = vect_dr_behavior (stmt_info)->step;
if (tree_int_cst_sgn (step) == -1)
iv_step = fold_build1 (NEGATE_EXPR, TREE_TYPE (iv_step), iv_step);
return iv_step;
@@ -6174,14 +6176,16 @@ vectorizable_operation (stmt_vec_info st
static void
ensure_base_align (struct data_reference *dr)
{
- if (DR_VECT_AUX (dr)->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
+ stmt_vec_info stmt_info = vect_dr_stmt (dr);
+ if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
return;
- if (DR_VECT_AUX (dr)->base_misaligned)
+ if (stmt_info->dr_aux.base_misaligned)
{
- tree base_decl = DR_VECT_AUX (dr)->base_decl;
+ tree base_decl = stmt_info->dr_aux.base_decl;
- unsigned int align_base_to = DR_TARGET_ALIGNMENT (dr) * BITS_PER_UNIT;
+ unsigned int align_base_to = (stmt_info->dr_aux.target_alignment
+ * BITS_PER_UNIT);
if (decl_in_symtab_p (base_decl))
symtab_node::get (base_decl)->increase_alignment (align_base_to);
@@ -6190,7 +6194,7 @@ ensure_base_align (struct data_reference
SET_DECL_ALIGN (base_decl, align_base_to);
DECL_USER_ALIGN (base_decl) = 1;
}
- DR_VECT_AUX (dr)->base_misaligned = false;
+ stmt_info->dr_aux.base_misaligned = false;
}
}
@@ -7175,16 +7179,16 @@ vectorizable_store (stmt_vec_info stmt_i
vect_permute_store_chain(). */
vec_oprnd = result_chain[i];
- align = DR_TARGET_ALIGNMENT (first_dr);
- if (aligned_access_p (first_dr))
+ align = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
+ if (aligned_access_p (first_stmt_info))
misalign = 0;
- else if (DR_MISALIGNMENT (first_dr) == -1)
+ else if (dr_misalignment (first_stmt_info) == -1)
{
- align = dr_alignment (vect_dr_behavior (first_dr));
+ align = dr_alignment (vect_dr_behavior (first_stmt_info));
misalign = 0;
}
else
- misalign = DR_MISALIGNMENT (first_dr);
+ misalign = dr_misalignment (first_stmt_info);
if (dataref_offset == NULL_TREE
&& TREE_CODE (dataref_ptr) == SSA_NAME)
set_ptr_info_alignment (get_ptr_info (dataref_ptr), align,
@@ -7227,9 +7231,9 @@ vectorizable_store (stmt_vec_info stmt_i
dataref_offset
? dataref_offset
: build_int_cst (ref_type, 0));
- if (aligned_access_p (first_dr))
+ if (aligned_access_p (first_stmt_info))
;
- else if (DR_MISALIGNMENT (first_dr) == -1)
+ else if (dr_misalignment (first_stmt_info) == -1)
TREE_TYPE (data_ref)
= build_aligned_type (TREE_TYPE (data_ref),
align * BITS_PER_UNIT);
@@ -8326,19 +8330,20 @@ vectorizable_load (stmt_vec_info stmt_in
break;
}
- align = DR_TARGET_ALIGNMENT (dr);
+ align = STMT_VINFO_TARGET_ALIGNMENT (stmt_info);
if (alignment_support_scheme == dr_aligned)
{
- gcc_assert (aligned_access_p (first_dr));
+ gcc_assert (aligned_access_p (first_stmt_info));
misalign = 0;
}
- else if (DR_MISALIGNMENT (first_dr) == -1)
+ else if (dr_misalignment (first_stmt_info) == -1)
{
- align = dr_alignment (vect_dr_behavior (first_dr));
+ align = dr_alignment
+ (vect_dr_behavior (first_stmt_info));
misalign = 0;
}
else
- misalign = DR_MISALIGNMENT (first_dr);
+ misalign = dr_misalignment (first_stmt_info);
if (dataref_offset == NULL_TREE
&& TREE_CODE (dataref_ptr) == SSA_NAME)
set_ptr_info_alignment (get_ptr_info (dataref_ptr),
@@ -8365,7 +8370,7 @@ vectorizable_load (stmt_vec_info stmt_in
: build_int_cst (ref_type, 0));
if (alignment_support_scheme == dr_aligned)
;
- else if (DR_MISALIGNMENT (first_dr) == -1)
+ else if (dr_misalignment (first_stmt_info) == -1)
TREE_TYPE (data_ref)
= build_aligned_type (TREE_TYPE (data_ref),
align * BITS_PER_UNIT);
@@ -8392,7 +8397,8 @@ vectorizable_load (stmt_vec_info stmt_in
ptr = copy_ssa_name (dataref_ptr);
else
ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
- unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+ unsigned int align
+ = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
new_stmt = gimple_build_assign
(ptr, BIT_AND_EXPR, dataref_ptr,
build_int_cst
@@ -8436,7 +8442,8 @@ vectorizable_load (stmt_vec_info stmt_in
new_temp = copy_ssa_name (dataref_ptr);
else
new_temp = make_ssa_name (TREE_TYPE (dataref_ptr));
- unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+ unsigned int align
+ = STMT_VINFO_TARGET_ALIGNMENT (first_stmt_info);
new_stmt = gimple_build_assign
(new_temp, BIT_AND_EXPR, dataref_ptr,
build_int_cst (TREE_TYPE (dataref_ptr),