* [0/3] Turn current_vector_size into a vec_info field
@ 2019-10-20 13:23 Richard Sandiford
2019-10-20 13:27 ` [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type Richard Sandiford
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Richard Sandiford @ 2019-10-20 13:23 UTC (permalink / raw)
To: gcc-patches
Now that we're keeping multiple vec_infos around at the same time,
it seemed worth turning current_vector_size into a vec_info field.
This for example simplifies the book-keeping in vect_analyze_loop
and helps with some follow-on changes.
Tested on aarch64-linux-gnu and x86_64-linux-gnu.
Richard
* [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type
2019-10-20 13:23 [0/3] Turn current_vector_size into a vec_info field Richard Sandiford
@ 2019-10-20 13:27 ` Richard Sandiford
2019-10-30 14:22 ` Richard Biener
2019-10-20 13:30 ` [2/3] Pass vec_infos to more routines Richard Sandiford
` (2 subsequent siblings)
3 siblings, 1 reply; 7+ messages in thread
From: Richard Sandiford @ 2019-10-20 13:27 UTC (permalink / raw)
To: gcc-patches
The increase_alignment pass was using get_vectype_for_scalar_type
to get the preferred vector type for each array element type.
This has the effect of carrying over the vector size chosen by
the first successful call to all subsequent calls, whereas it seems
more natural to treat each array type independently and pick the
"best" vector type for each element type.
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.c (get_vec_alignment_for_array_type): Use
get_vectype_for_scalar_type_and_size instead of
get_vectype_for_scalar_type.
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c 2019-10-20 13:58:02.091634417 +0100
+++ gcc/tree-vectorizer.c 2019-10-20 14:13:50.784857051 +0100
@@ -1347,7 +1347,8 @@ get_vec_alignment_for_array_type (tree t
gcc_assert (TREE_CODE (type) == ARRAY_TYPE);
poly_uint64 array_size, vector_size;
- tree vectype = get_vectype_for_scalar_type (strip_array_types (type));
+ tree scalar_type = strip_array_types (type);
+ tree vectype = get_vectype_for_scalar_type_and_size (scalar_type, 0);
if (!vectype
|| !poly_int_tree_p (TYPE_SIZE (type), &array_size)
|| !poly_int_tree_p (TYPE_SIZE (vectype), &vector_size)
* [2/3] Pass vec_infos to more routines
2019-10-20 13:23 [0/3] Turn current_vector_size into a vec_info field Richard Sandiford
2019-10-20 13:27 ` [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type Richard Sandiford
@ 2019-10-20 13:30 ` Richard Sandiford
2019-10-30 14:25 ` Richard Biener
2019-10-20 14:28 ` [3/3] Replace current_vector_size with vec_info::vector_size Richard Sandiford
2019-10-21 6:01 ` [0/3] Turn current_vector_size into a vec_info field Richard Biener
3 siblings, 1 reply; 7+ messages in thread
From: Richard Sandiford @ 2019-10-20 13:30 UTC (permalink / raw)
To: gcc-patches
These 11 patches just pass vec_infos to one routine each. Splitting
them up makes it easier to write the changelogs, but they're so trivial
that it seemed better to send them all in one message.
Pass a vec_info to vect_supportable_shift
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_supportable_shift): Take a vec_info.
* tree-vect-stmts.c (vect_supportable_shift): Likewise.
* tree-vect-patterns.c (vect_synth_mult_by_constant): Update call
accordingly.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 13:58:02.095634389 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:00.632786715 +0100
@@ -1634,7 +1634,7 @@ extern void vect_get_load_cost (stmt_vec
stmt_vector_for_cost *, bool);
extern void vect_get_store_cost (stmt_vec_info, int,
unsigned int *, stmt_vector_for_cost *);
-extern bool vect_supportable_shift (enum tree_code, tree);
+extern bool vect_supportable_shift (vec_info *, enum tree_code, tree);
extern tree vect_gen_perm_mask_any (tree, const vec_perm_indices &);
extern tree vect_gen_perm_mask_checked (tree, const vec_perm_indices &);
extern void optimize_mask_stores (class loop*);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 13:58:02.111634275 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:00.628786742 +0100
@@ -5465,7 +5465,7 @@ vectorizable_assignment (stmt_vec_info s
either as shift by a scalar or by a vector. */
bool
-vect_supportable_shift (enum tree_code code, tree scalar_type)
+vect_supportable_shift (vec_info *, enum tree_code code, tree scalar_type)
{
machine_mode vec_mode;
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c 2019-10-17 14:22:55.519309037 +0100
+++ gcc/tree-vect-patterns.c 2019-10-20 14:14:00.628786742 +0100
@@ -2720,6 +2720,7 @@ apply_binop_and_append_stmt (tree_code c
vect_synth_mult_by_constant (tree op, tree val,
stmt_vec_info stmt_vinfo)
{
+ vec_info *vinfo = stmt_vinfo->vinfo;
tree itype = TREE_TYPE (op);
machine_mode mode = TYPE_MODE (itype);
struct algorithm alg;
@@ -2738,7 +2739,7 @@ vect_synth_mult_by_constant (tree op, tr
/* Targets that don't support vector shifts but support vector additions
can synthesize shifts that way. */
- bool synth_shift_p = !vect_supportable_shift (LSHIFT_EXPR, multtype);
+ bool synth_shift_p = !vect_supportable_shift (vinfo, LSHIFT_EXPR, multtype);
HOST_WIDE_INT hwval = tree_to_shwi (val);
/* Use MAX_COST here as we don't want to limit the sequence on rtx costs.
Pass a vec_info to vect_supportable_direct_optab_p
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_supportable_direct_optab_p): Take
a vec_info.
(vect_recog_dot_prod_pattern): Update call accordingly.
(vect_recog_sad_pattern, vect_recog_pow_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c 2019-10-20 14:14:00.628786742 +0100
+++ gcc/tree-vect-patterns.c 2019-10-20 14:14:03.588765602 +0100
@@ -187,7 +187,7 @@ vect_get_external_def_edge (vec_info *vi
is nonnull. */
static bool
-vect_supportable_direct_optab_p (tree otype, tree_code code,
+vect_supportable_direct_optab_p (vec_info *, tree otype, tree_code code,
tree itype, tree *vecotype_out,
tree *vecitype_out = NULL)
{
@@ -985,7 +985,7 @@ vect_recog_dot_prod_pattern (stmt_vec_in
vect_pattern_detected ("vect_recog_dot_prod_pattern", last_stmt);
tree half_vectype;
- if (!vect_supportable_direct_optab_p (type, DOT_PROD_EXPR, half_type,
+ if (!vect_supportable_direct_optab_p (vinfo, type, DOT_PROD_EXPR, half_type,
type_out, &half_vectype))
return NULL;
@@ -1143,7 +1143,7 @@ vect_recog_sad_pattern (stmt_vec_info st
vect_pattern_detected ("vect_recog_sad_pattern", last_stmt);
tree half_vectype;
- if (!vect_supportable_direct_optab_p (sum_type, SAD_EXPR, half_type,
+ if (!vect_supportable_direct_optab_p (vinfo, sum_type, SAD_EXPR, half_type,
type_out, &half_vectype))
return NULL;
@@ -1273,6 +1273,7 @@ vect_recog_widen_mult_pattern (stmt_vec_
static gimple *
vect_recog_pow_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
{
+ vec_info *vinfo = stmt_vinfo->vinfo;
gimple *last_stmt = stmt_vinfo->stmt;
tree base, exp;
gimple *stmt;
@@ -1366,7 +1367,7 @@ vect_recog_pow_pattern (stmt_vec_info st
|| (TREE_CODE (exp) == REAL_CST
&& real_equal (&TREE_REAL_CST (exp), &dconst2)))
{
- if (!vect_supportable_direct_optab_p (TREE_TYPE (base), MULT_EXPR,
+ if (!vect_supportable_direct_optab_p (vinfo, TREE_TYPE (base), MULT_EXPR,
TREE_TYPE (base), type_out))
return NULL;
@@ -1472,8 +1473,8 @@ vect_recog_widen_sum_pattern (stmt_vec_i
vect_pattern_detected ("vect_recog_widen_sum_pattern", last_stmt);
- if (!vect_supportable_direct_optab_p (type, WIDEN_SUM_EXPR, unprom0.type,
- type_out))
+ if (!vect_supportable_direct_optab_p (vinfo, type, WIDEN_SUM_EXPR,
+ unprom0.type, type_out))
return NULL;
var = vect_recog_temp_ssa_var (type, NULL);
Pass a vec_info to get_mask_type_for_scalar_type
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (get_mask_type_for_scalar_type): Take a vec_info.
* tree-vect-stmts.c (get_mask_type_for_scalar_type): Likewise.
(vect_check_load_store_mask): Update call accordingly.
(vect_get_mask_type_for_stmt): Likewise.
* tree-vect-patterns.c (check_bool_pattern): Likewise.
(search_type_for_mask_1, vect_recog_mask_conversion_pattern): Likewise.
(vect_convert_mask_for_vectype): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:00.632786715 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:06.472745000 +0100
@@ -1591,7 +1591,7 @@ extern bool vect_can_advance_ivs_p (loop
extern poly_uint64 current_vector_size;
extern tree get_vectype_for_scalar_type (tree);
extern tree get_vectype_for_scalar_type_and_size (tree, poly_uint64);
-extern tree get_mask_type_for_scalar_type (tree);
+extern tree get_mask_type_for_scalar_type (vec_info *, tree);
extern tree get_same_sized_vectype (tree, tree);
extern bool vect_get_loop_mask_type (loop_vec_info);
extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:00.628786742 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:06.472745000 +0100
@@ -2558,6 +2558,7 @@ vect_check_load_store_mask (stmt_vec_inf
vect_def_type *mask_dt_out,
tree *mask_vectype_out)
{
+ vec_info *vinfo = stmt_info->vinfo;
if (!VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (mask)))
{
if (dump_enabled_p ())
@@ -2586,7 +2587,7 @@ vect_check_load_store_mask (stmt_vec_inf
tree vectype = STMT_VINFO_VECTYPE (stmt_info);
if (!mask_vectype)
- mask_vectype = get_mask_type_for_scalar_type (TREE_TYPE (vectype));
+ mask_vectype = get_mask_type_for_scalar_type (vinfo, TREE_TYPE (vectype));
if (!mask_vectype || !VECTOR_BOOLEAN_TYPE_P (mask_vectype))
{
@@ -11156,7 +11157,7 @@ get_vectype_for_scalar_type (tree scalar
of vectors of specified SCALAR_TYPE as supported by target. */
tree
-get_mask_type_for_scalar_type (tree scalar_type)
+get_mask_type_for_scalar_type (vec_info *, tree scalar_type)
{
tree vectype = get_vectype_for_scalar_type (scalar_type);
@@ -11986,6 +11987,7 @@ vect_get_vector_types_for_stmt (stmt_vec
opt_tree
vect_get_mask_type_for_stmt (stmt_vec_info stmt_info)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple *stmt = stmt_info->stmt;
tree mask_type = NULL;
tree vectype, scalar_type;
@@ -11995,7 +11997,7 @@ vect_get_mask_type_for_stmt (stmt_vec_in
&& !VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
{
scalar_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
- mask_type = get_mask_type_for_scalar_type (scalar_type);
+ mask_type = get_mask_type_for_scalar_type (vinfo, scalar_type);
if (!mask_type)
return opt_tree::failure_at (stmt,
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c 2019-10-20 14:14:03.588765602 +0100
+++ gcc/tree-vect-patterns.c 2019-10-20 14:14:06.468745032 +0100
@@ -3616,7 +3616,8 @@ check_bool_pattern (tree var, vec_info *
if (comp_vectype == NULL_TREE)
return false;
- tree mask_type = get_mask_type_for_scalar_type (TREE_TYPE (rhs1));
+ tree mask_type = get_mask_type_for_scalar_type (vinfo,
+ TREE_TYPE (rhs1));
if (mask_type
&& expand_vec_cmp_expr_p (comp_vectype, mask_type, rhs_code))
return false;
@@ -3943,7 +3944,7 @@ search_type_for_mask_1 (tree var, vec_in
break;
}
- mask_type = get_mask_type_for_scalar_type (TREE_TYPE (rhs1));
+ mask_type = get_mask_type_for_scalar_type (vinfo, TREE_TYPE (rhs1));
if (!mask_type
|| !expand_vec_cmp_expr_p (comp_vectype, mask_type, rhs_code))
{
@@ -4275,7 +4276,7 @@ vect_recog_mask_conversion_pattern (stmt
tree mask_arg_type = search_type_for_mask (mask_arg, vinfo);
if (!mask_arg_type)
return NULL;
- vectype2 = get_mask_type_for_scalar_type (mask_arg_type);
+ vectype2 = get_mask_type_for_scalar_type (vinfo, mask_arg_type);
if (!vectype1 || !vectype2
|| known_eq (TYPE_VECTOR_SUBPARTS (vectype1),
@@ -4352,7 +4353,7 @@ vect_recog_mask_conversion_pattern (stmt
else
return NULL;
- vectype2 = get_mask_type_for_scalar_type (rhs1_type);
+ vectype2 = get_mask_type_for_scalar_type (vinfo, rhs1_type);
if (!vectype1 || !vectype2)
return NULL;
@@ -4442,14 +4443,14 @@ vect_recog_mask_conversion_pattern (stmt
if (TYPE_PRECISION (rhs1_type) < TYPE_PRECISION (rhs2_type))
{
- vectype1 = get_mask_type_for_scalar_type (rhs1_type);
+ vectype1 = get_mask_type_for_scalar_type (vinfo, rhs1_type);
if (!vectype1)
return NULL;
rhs2 = build_mask_conversion (rhs2, vectype1, stmt_vinfo);
}
else
{
- vectype1 = get_mask_type_for_scalar_type (rhs2_type);
+ vectype1 = get_mask_type_for_scalar_type (vinfo, rhs2_type);
if (!vectype1)
return NULL;
rhs1 = build_mask_conversion (rhs1, vectype1, stmt_vinfo);
@@ -4520,7 +4521,7 @@ vect_convert_mask_for_vectype (tree mask
tree mask_type = search_type_for_mask (mask, vinfo);
if (mask_type)
{
- tree mask_vectype = get_mask_type_for_scalar_type (mask_type);
+ tree mask_vectype = get_mask_type_for_scalar_type (vinfo, mask_type);
if (mask_vectype
&& maybe_ne (TYPE_VECTOR_SUBPARTS (vectype),
TYPE_VECTOR_SUBPARTS (mask_vectype)))
Pass a vec_info to get_vectype_for_scalar_type
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (get_vectype_for_scalar_type): Take a vec_info.
* tree-vect-stmts.c (get_vectype_for_scalar_type): Likewise.
(vect_prologue_cost_for_slp_op): Update call accordingly.
(vect_get_vec_def_for_operand, vect_get_gather_scatter_ops)
(vect_get_strided_load_store_ops, vectorizable_simd_clone_call)
(vect_supportable_shift, vect_is_simple_cond, vectorizable_comparison)
(get_mask_type_for_scalar_type): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vect-data-refs.c (vect_analyze_data_refs): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(get_initial_def_for_reduction, build_vect_cond_expr): Likewise.
* tree-vect-patterns.c (vect_supportable_direct_optab_p): Likewise.
(vect_split_statement, vect_convert_input): Likewise.
(vect_recog_widen_op_pattern, vect_recog_pow_pattern): Likewise.
(vect_recog_over_widening_pattern, vect_recog_mulhs_pattern): Likewise.
(vect_recog_average_pattern, vect_recog_cast_forwprop_pattern)
(vect_recog_rotate_pattern, vect_recog_vector_vector_shift_pattern)
(vect_synth_mult_by_constant, vect_recog_mult_pattern): Likewise.
(vect_recog_divmod_pattern, vect_recog_mixed_size_cond_pattern)
(check_bool_pattern, adjust_bool_pattern_cast, adjust_bool_pattern)
(search_type_for_mask_1, vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_add_conversion_to_pattern): Likewise.
(vect_recog_gather_scatter_pattern): Likewise.
* tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
(vect_analyze_slp_instance, vect_get_constant_vectors): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:06.472745000 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:09.672722145 +0100
@@ -1589,7 +1589,7 @@ extern bool vect_can_advance_ivs_p (loop
/* In tree-vect-stmts.c. */
extern poly_uint64 current_vector_size;
-extern tree get_vectype_for_scalar_type (tree);
+extern tree get_vectype_for_scalar_type (vec_info *, tree);
extern tree get_vectype_for_scalar_type_and_size (tree, poly_uint64);
extern tree get_mask_type_for_scalar_type (vec_info *, tree);
extern tree get_same_sized_vectype (tree, tree);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:06.472745000 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:09.672722145 +0100
@@ -796,6 +796,7 @@ vect_prologue_cost_for_slp_op (slp_tree
unsigned opno, enum vect_def_type dt,
stmt_vector_for_cost *cost_vec)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
tree op = gimple_op (stmt, opno);
unsigned prologue_cost = 0;
@@ -803,7 +804,7 @@ vect_prologue_cost_for_slp_op (slp_tree
/* Without looking at the actual initializer a vector of
constants can be implemented as load from the constant pool.
When all elements are the same we can use a splat. */
- tree vectype = get_vectype_for_scalar_type (TREE_TYPE (op));
+ tree vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (op));
unsigned group_size = SLP_TREE_SCALAR_STMTS (node).length ();
unsigned num_vects_to_check;
unsigned HOST_WIDE_INT const_nunits;
@@ -1610,7 +1611,7 @@ vect_get_vec_def_for_operand (tree op, s
&& VECTOR_BOOLEAN_TYPE_P (stmt_vectype))
vector_type = build_same_sized_truth_vector_type (stmt_vectype);
else
- vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
+ vector_type = get_vectype_for_scalar_type (loop_vinfo, TREE_TYPE (op));
gcc_assert (vector_type);
return vect_init_vector (stmt_vinfo, op, vector_type, NULL);
@@ -2975,6 +2976,7 @@ vect_get_gather_scatter_ops (class loop
gather_scatter_info *gs_info,
tree *dataref_ptr, tree *vec_offset)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple_seq stmts = NULL;
*dataref_ptr = force_gimple_operand (gs_info->base, &stmts, true, NULL_TREE);
if (stmts != NULL)
@@ -2985,7 +2987,7 @@ vect_get_gather_scatter_ops (class loop
gcc_assert (!new_bb);
}
tree offset_type = TREE_TYPE (gs_info->offset);
- tree offset_vectype = get_vectype_for_scalar_type (offset_type);
+ tree offset_vectype = get_vectype_for_scalar_type (vinfo, offset_type);
*vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt_info,
offset_vectype);
}
@@ -3020,7 +3022,7 @@ vect_get_strided_load_store_ops (stmt_ve
/* The offset given in GS_INFO can have pointer type, so use the element
type of the vector instead. */
tree offset_type = TREE_TYPE (gs_info->offset);
- tree offset_vectype = get_vectype_for_scalar_type (offset_type);
+ tree offset_vectype = get_vectype_for_scalar_type (loop_vinfo, offset_type);
offset_type = TREE_TYPE (offset_vectype);
/* Calculate X = DR_STEP / SCALE and convert it to the appropriate type. */
@@ -4101,9 +4103,8 @@ vectorizable_simd_clone_call (stmt_vec_i
|| arginfo[i].dt == vect_external_def)
&& bestn->simdclone->args[i].arg_type == SIMD_CLONE_ARG_TYPE_VECTOR)
{
- arginfo[i].vectype
- = get_vectype_for_scalar_type (TREE_TYPE (gimple_call_arg (stmt,
- i)));
+ tree arg_type = TREE_TYPE (gimple_call_arg (stmt, i));
+ arginfo[i].vectype = get_vectype_for_scalar_type (vinfo, arg_type);
if (arginfo[i].vectype == NULL
|| (simd_clone_subparts (arginfo[i].vectype)
> bestn->simdclone->simdlen))
@@ -5466,7 +5467,7 @@ vectorizable_assignment (stmt_vec_info s
either as shift by a scalar or by a vector. */
bool
-vect_supportable_shift (vec_info *, enum tree_code code, tree scalar_type)
+vect_supportable_shift (vec_info *vinfo, enum tree_code code, tree scalar_type)
{
machine_mode vec_mode;
@@ -5474,7 +5475,7 @@ vect_supportable_shift (vec_info *, enum
int icode;
tree vectype;
- vectype = get_vectype_for_scalar_type (scalar_type);
+ vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
if (!vectype)
return false;
@@ -9763,7 +9764,7 @@ vect_is_simple_cond (tree cond, vec_info
scalar_type = build_nonstandard_integer_type
(tree_to_uhwi (TYPE_SIZE (TREE_TYPE (vectype))),
TYPE_UNSIGNED (scalar_type));
- *comp_vectype = get_vectype_for_scalar_type (scalar_type);
+ *comp_vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
}
return true;
@@ -10359,7 +10360,7 @@ vectorizable_comparison (stmt_vec_info s
/* Invariant comparison. */
if (!vectype)
{
- vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
+ vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
if (maybe_ne (TYPE_VECTOR_SUBPARTS (vectype), nunits))
return false;
}
@@ -11140,7 +11141,7 @@ get_vectype_for_scalar_type_and_size (tr
by the target. */
tree
-get_vectype_for_scalar_type (tree scalar_type)
+get_vectype_for_scalar_type (vec_info *, tree scalar_type)
{
tree vectype;
vectype = get_vectype_for_scalar_type_and_size (scalar_type,
@@ -11157,9 +11158,9 @@ get_vectype_for_scalar_type (tree scalar
of vectors of specified SCALAR_TYPE as supported by target. */
tree
-get_mask_type_for_scalar_type (vec_info *, tree scalar_type)
+get_mask_type_for_scalar_type (vec_info *vinfo, tree scalar_type)
{
- tree vectype = get_vectype_for_scalar_type (scalar_type);
+ tree vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
if (!vectype)
return NULL;
@@ -11853,6 +11854,7 @@ vect_get_vector_types_for_stmt (stmt_vec
tree *stmt_vectype_out,
tree *nunits_vectype_out)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple *stmt = stmt_info->stmt;
*stmt_vectype_out = NULL_TREE;
@@ -11919,7 +11921,7 @@ vect_get_vector_types_for_stmt (stmt_vec
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
"get vectype for scalar type: %T\n", scalar_type);
- vectype = get_vectype_for_scalar_type (scalar_type);
+ vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
if (!vectype)
return opt_result::failure_at (stmt,
"not vectorized:"
@@ -11952,7 +11954,7 @@ vect_get_vector_types_for_stmt (stmt_vec
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
"get vectype for scalar type: %T\n", scalar_type);
- nunits_vectype = get_vectype_for_scalar_type (scalar_type);
+ nunits_vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
}
if (!nunits_vectype)
return opt_result::failure_at (stmt,
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c 2019-10-11 15:43:54.543490491 +0100
+++ gcc/tree-vect-data-refs.c 2019-10-20 14:14:09.664722204 +0100
@@ -4344,7 +4344,7 @@ vect_analyze_data_refs (vec_info *vinfo,
/* Set vectype for STMT. */
scalar_type = TREE_TYPE (DR_REF (dr));
STMT_VINFO_VECTYPE (stmt_info)
- = get_vectype_for_scalar_type (scalar_type);
+ = get_vectype_for_scalar_type (vinfo, scalar_type);
if (!STMT_VINFO_VECTYPE (stmt_info))
{
if (dump_enabled_p ())
@@ -4392,7 +4392,8 @@ vect_analyze_data_refs (vec_info *vinfo,
if (!vect_check_gather_scatter (stmt_info,
as_a <loop_vec_info> (vinfo),
&gs_info)
- || !get_vectype_for_scalar_type (TREE_TYPE (gs_info.offset)))
+ || !get_vectype_for_scalar_type (vinfo,
+ TREE_TYPE (gs_info.offset)))
{
if (fatal)
*fatal = false;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 13:58:02.095634389 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:09.668722173 +0100
@@ -327,7 +327,7 @@ vect_determine_vectorization_factor (loo
"get vectype for scalar type: %T\n",
scalar_type);
- vectype = get_vectype_for_scalar_type (scalar_type);
+ vectype = get_vectype_for_scalar_type (loop_vinfo, scalar_type);
if (!vectype)
return opt_result::failure_at (phi,
"not vectorized: unsupported "
@@ -3774,7 +3774,7 @@ get_initial_def_for_reduction (stmt_vec_
loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
class loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
tree scalar_type = TREE_TYPE (init_val);
- tree vectype = get_vectype_for_scalar_type (scalar_type);
+ tree vectype = get_vectype_for_scalar_type (loop_vinfo, scalar_type);
tree def_for_init;
tree init_def;
REAL_VALUE_TYPE real_init_val = dconst0;
@@ -5555,11 +5555,11 @@ build_vect_cond_expr (enum tree_code cod
corresponds to the type of arguments to the reduction stmt, and should *NOT*
be used to create the vectorized stmt. The right vectype for the vectorized
stmt is obtained from the type of the result X:
- get_vectype_for_scalar_type (TREE_TYPE (X))
+ get_vectype_for_scalar_type (vinfo, TREE_TYPE (X))
This means that, contrary to "regular" reductions (or "regular" stmts in
general), the following equation:
- STMT_VINFO_VECTYPE == get_vectype_for_scalar_type (TREE_TYPE (X))
+ STMT_VINFO_VECTYPE == get_vectype_for_scalar_type (vinfo, TREE_TYPE (X))
does *NOT* necessarily hold for reduction patterns. */
bool
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c 2019-10-20 14:14:06.468745032 +0100
+++ gcc/tree-vect-patterns.c 2019-10-20 14:14:09.668722173 +0100
@@ -187,15 +187,15 @@ vect_get_external_def_edge (vec_info *vi
is nonnull. */
static bool
-vect_supportable_direct_optab_p (vec_info *, tree otype, tree_code code,
+vect_supportable_direct_optab_p (vec_info *vinfo, tree otype, tree_code code,
tree itype, tree *vecotype_out,
tree *vecitype_out = NULL)
{
- tree vecitype = get_vectype_for_scalar_type (itype);
+ tree vecitype = get_vectype_for_scalar_type (vinfo, itype);
if (!vecitype)
return false;
- tree vecotype = get_vectype_for_scalar_type (otype);
+ tree vecotype = get_vectype_for_scalar_type (vinfo, otype);
if (!vecotype)
return false;
@@ -635,6 +635,7 @@ vect_recog_temp_ssa_var (tree type, gimp
vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
gimple *stmt1, tree vectype)
{
+ vec_info *vinfo = stmt2_info->vinfo;
if (is_pattern_stmt_p (stmt2_info))
{
/* STMT2_INFO is part of a pattern. Get the statement to which
@@ -678,7 +679,7 @@ vect_split_statement (stmt_vec_info stmt
two-statement pattern now. */
gcc_assert (!STMT_VINFO_RELATED_STMT (stmt2_info));
tree lhs_type = TREE_TYPE (gimple_get_lhs (stmt2_info->stmt));
- tree lhs_vectype = get_vectype_for_scalar_type (lhs_type);
+ tree lhs_vectype = get_vectype_for_scalar_type (vinfo, lhs_type);
if (!lhs_vectype)
return false;
@@ -715,6 +716,8 @@ vect_split_statement (stmt_vec_info stmt
vect_convert_input (stmt_vec_info stmt_info, tree type,
vect_unpromoted_value *unprom, tree vectype)
{
+ vec_info *vinfo = stmt_info->vinfo;
+
/* Check for a no-op conversion. */
if (types_compatible_p (type, TREE_TYPE (unprom->op)))
return unprom->op;
@@ -752,7 +755,7 @@ vect_convert_input (stmt_vec_info stmt_i
unsigned promotion. */
tree midtype = build_nonstandard_integer_type
(TYPE_PRECISION (type), TYPE_UNSIGNED (unprom->type));
- tree vec_midtype = get_vectype_for_scalar_type (midtype);
+ tree vec_midtype = get_vectype_for_scalar_type (vinfo, midtype);
if (vec_midtype)
{
input = vect_recog_temp_ssa_var (midtype, NULL);
@@ -1189,6 +1192,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
tree_code orig_code, tree_code wide_code,
bool shift_p, const char *name)
{
+ vec_info *vinfo = last_stmt_info->vinfo;
gimple *last_stmt = last_stmt_info->stmt;
vect_unpromoted_value unprom[2];
@@ -1208,8 +1212,8 @@ vect_recog_widen_op_pattern (stmt_vec_in
TYPE_UNSIGNED (half_type));
/* Check target support */
- tree vectype = get_vectype_for_scalar_type (half_type);
- tree vecitype = get_vectype_for_scalar_type (itype);
+ tree vectype = get_vectype_for_scalar_type (vinfo, half_type);
+ tree vecitype = get_vectype_for_scalar_type (vinfo, itype);
enum tree_code dummy_code;
int dummy_int;
auto_vec<tree> dummy_vec;
@@ -1221,7 +1225,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
&dummy_int, &dummy_vec))
return NULL;
- *type_out = get_vectype_for_scalar_type (type);
+ *type_out = get_vectype_for_scalar_type (vinfo, type);
if (!*type_out)
return NULL;
@@ -1342,7 +1346,7 @@ vect_recog_pow_pattern (stmt_vec_info st
if (node->simd_clones == NULL)
return NULL;
}
- *type_out = get_vectype_for_scalar_type (TREE_TYPE (base));
+ *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base));
if (!*type_out)
return NULL;
tree def = vect_recog_temp_ssa_var (TREE_TYPE (base), NULL);
@@ -1380,7 +1384,7 @@ vect_recog_pow_pattern (stmt_vec_info st
if (TREE_CODE (exp) == REAL_CST
&& real_equal (&TREE_REAL_CST (exp), &dconsthalf))
{
- *type_out = get_vectype_for_scalar_type (TREE_TYPE (base));
+ *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base));
if (*type_out
&& direct_internal_fn_supported_p (IFN_SQRT, *type_out,
OPTIMIZE_FOR_SPEED))
@@ -1665,7 +1669,7 @@ vect_recog_over_widening_pattern (stmt_v
vect_pattern_detected ("vect_recog_over_widening_pattern", last_stmt);
- *type_out = get_vectype_for_scalar_type (type);
+ *type_out = get_vectype_for_scalar_type (vinfo, type);
if (!*type_out)
return NULL;
@@ -1686,8 +1690,8 @@ vect_recog_over_widening_pattern (stmt_v
wants to rewrite anyway. If targets have a minimum element size
for some optabs, we should pattern-match smaller ops to larger ops
where beneficial. */
- tree new_vectype = get_vectype_for_scalar_type (new_type);
- tree op_vectype = get_vectype_for_scalar_type (op_type);
+ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
+ tree op_vectype = get_vectype_for_scalar_type (vinfo, op_type);
if (!new_vectype || !op_vectype)
return NULL;
@@ -1864,7 +1868,7 @@ vect_recog_mulhs_pattern (stmt_vec_info
(target_precision, TYPE_UNSIGNED (new_type));
/* Check for target support. */
- tree new_vectype = get_vectype_for_scalar_type (new_type);
+ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
if (!new_vectype
|| !direct_internal_fn_supported_p
(ifn, new_vectype, OPTIMIZE_FOR_SPEED))
@@ -1872,7 +1876,7 @@ vect_recog_mulhs_pattern (stmt_vec_info
/* The IR requires a valid vector type for the cast result, even though
it's likely to be discarded. */
- *type_out = get_vectype_for_scalar_type (lhs_type);
+ *type_out = get_vectype_for_scalar_type (vinfo, lhs_type);
if (!*type_out)
return NULL;
@@ -2014,7 +2018,7 @@ vect_recog_average_pattern (stmt_vec_inf
TYPE_UNSIGNED (new_type));
/* Check for target support. */
- tree new_vectype = get_vectype_for_scalar_type (new_type);
+ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
if (!new_vectype
|| !direct_internal_fn_supported_p (ifn, new_vectype,
OPTIMIZE_FOR_SPEED))
@@ -2022,7 +2026,7 @@ vect_recog_average_pattern (stmt_vec_inf
/* The IR requires a valid vector type for the cast result, even though
it's likely to be discarded. */
- *type_out = get_vectype_for_scalar_type (type);
+ *type_out = get_vectype_for_scalar_type (vinfo, type);
if (!*type_out)
return NULL;
@@ -2108,7 +2112,7 @@ vect_recog_cast_forwprop_pattern (stmt_v
the unnecessary widening and narrowing. */
vect_pattern_detected ("vect_recog_cast_forwprop_pattern", last_stmt);
- *type_out = get_vectype_for_scalar_type (lhs_type);
+ *type_out = get_vectype_for_scalar_type (vinfo, lhs_type);
if (!*type_out)
return NULL;
@@ -2219,7 +2223,7 @@ vect_recog_rotate_pattern (stmt_vec_info
}
type = TREE_TYPE (lhs);
- vectype = get_vectype_for_scalar_type (type);
+ vectype = get_vectype_for_scalar_type (vinfo, type);
if (vectype == NULL_TREE)
return NULL;
@@ -2285,7 +2289,7 @@ vect_recog_rotate_pattern (stmt_vec_info
&& dt != vect_external_def)
return NULL;
- vectype = get_vectype_for_scalar_type (type);
+ vectype = get_vectype_for_scalar_type (vinfo, type);
if (vectype == NULL_TREE)
return NULL;
@@ -2404,7 +2408,7 @@ vect_recog_rotate_pattern (stmt_vec_info
}
else
{
- tree vecstype = get_vectype_for_scalar_type (stype);
+ tree vecstype = get_vectype_for_scalar_type (vinfo, stype);
if (vecstype == NULL_TREE)
return NULL;
@@ -2533,7 +2537,7 @@ vect_recog_vector_vector_shift_pattern (
if (!def_vinfo)
return NULL;
- *type_out = get_vectype_for_scalar_type (TREE_TYPE (oprnd0));
+ *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (oprnd0));
if (*type_out == NULL_TREE)
return NULL;
@@ -2556,7 +2560,8 @@ vect_recog_vector_vector_shift_pattern (
TYPE_PRECISION (TREE_TYPE (oprnd1)));
def = vect_recog_temp_ssa_var (TREE_TYPE (rhs1), NULL);
def_stmt = gimple_build_assign (def, BIT_AND_EXPR, rhs1, mask);
- tree vecstype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
+ tree vecstype = get_vectype_for_scalar_type (vinfo,
+ TREE_TYPE (rhs1));
append_pattern_def_seq (stmt_vinfo, def_stmt, vecstype);
}
}
@@ -2751,7 +2756,7 @@ vect_synth_mult_by_constant (tree op, tr
if (!possible)
return NULL;
- tree vectype = get_vectype_for_scalar_type (multtype);
+ tree vectype = get_vectype_for_scalar_type (vinfo, multtype);
if (!vectype
|| !target_supports_mult_synth_alg (&alg, variant,
@@ -2897,6 +2902,7 @@ vect_synth_mult_by_constant (tree op, tr
static gimple *
vect_recog_mult_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
{
+ vec_info *vinfo = stmt_vinfo->vinfo;
gimple *last_stmt = stmt_vinfo->stmt;
tree oprnd0, oprnd1, vectype, itype;
gimple *pattern_stmt;
@@ -2917,7 +2923,7 @@ vect_recog_mult_pattern (stmt_vec_info s
|| !type_has_mode_precision_p (itype))
return NULL;
- vectype = get_vectype_for_scalar_type (itype);
+ vectype = get_vectype_for_scalar_type (vinfo, itype);
if (vectype == NULL_TREE)
return NULL;
@@ -2985,6 +2991,7 @@ vect_recog_mult_pattern (stmt_vec_info s
static gimple *
vect_recog_divmod_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
{
+ vec_info *vinfo = stmt_vinfo->vinfo;
gimple *last_stmt = stmt_vinfo->stmt;
tree oprnd0, oprnd1, vectype, itype, cond;
gimple *pattern_stmt, *def_stmt;
@@ -3017,7 +3024,7 @@ vect_recog_divmod_pattern (stmt_vec_info
return NULL;
scalar_int_mode itype_mode = SCALAR_INT_TYPE_MODE (itype);
- vectype = get_vectype_for_scalar_type (itype);
+ vectype = get_vectype_for_scalar_type (vinfo, itype);
if (vectype == NULL_TREE)
return NULL;
@@ -3115,7 +3122,7 @@ vect_recog_divmod_pattern (stmt_vec_info
{
tree utype
= build_nonstandard_integer_type (prec, 1);
- tree vecutype = get_vectype_for_scalar_type (utype);
+ tree vecutype = get_vectype_for_scalar_type (vinfo, utype);
tree shift
= build_int_cst (utype, GET_MODE_BITSIZE (itype_mode)
- tree_log2 (oprnd1));
@@ -3433,6 +3440,7 @@ vect_recog_divmod_pattern (stmt_vec_info
static gimple *
vect_recog_mixed_size_cond_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
{
+ vec_info *vinfo = stmt_vinfo->vinfo;
gimple *last_stmt = stmt_vinfo->stmt;
tree cond_expr, then_clause, else_clause;
tree type, vectype, comp_vectype, itype = NULL_TREE, vecitype;
@@ -3455,7 +3463,7 @@ vect_recog_mixed_size_cond_pattern (stmt
return NULL;
comp_scalar_type = TREE_TYPE (TREE_OPERAND (cond_expr, 0));
- comp_vectype = get_vectype_for_scalar_type (comp_scalar_type);
+ comp_vectype = get_vectype_for_scalar_type (vinfo, comp_scalar_type);
if (comp_vectype == NULL_TREE)
return NULL;
@@ -3503,7 +3511,7 @@ vect_recog_mixed_size_cond_pattern (stmt
if (GET_MODE_BITSIZE (type_mode) == cmp_mode_size)
return NULL;
- vectype = get_vectype_for_scalar_type (type);
+ vectype = get_vectype_for_scalar_type (vinfo, type);
if (vectype == NULL_TREE)
return NULL;
@@ -3518,7 +3526,7 @@ vect_recog_mixed_size_cond_pattern (stmt
|| GET_MODE_BITSIZE (SCALAR_TYPE_MODE (itype)) != cmp_mode_size)
return NULL;
- vecitype = get_vectype_for_scalar_type (itype);
+ vecitype = get_vectype_for_scalar_type (vinfo, itype);
if (vecitype == NULL_TREE)
return NULL;
@@ -3612,7 +3620,7 @@ check_bool_pattern (tree var, vec_info *
if (stmt_could_throw_p (cfun, def_stmt))
return false;
- comp_vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
+ comp_vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
if (comp_vectype == NULL_TREE)
return false;
@@ -3627,7 +3635,7 @@ check_bool_pattern (tree var, vec_info *
scalar_mode mode = SCALAR_TYPE_MODE (TREE_TYPE (rhs1));
tree itype
= build_nonstandard_integer_type (GET_MODE_BITSIZE (mode), 1);
- vecitype = get_vectype_for_scalar_type (itype);
+ vecitype = get_vectype_for_scalar_type (vinfo, itype);
if (vecitype == NULL_TREE)
return false;
}
@@ -3656,10 +3664,11 @@ check_bool_pattern (tree var, vec_info *
static tree
adjust_bool_pattern_cast (tree type, tree var, stmt_vec_info stmt_info)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple *cast_stmt = gimple_build_assign (vect_recog_temp_ssa_var (type, NULL),
NOP_EXPR, var);
append_pattern_def_seq (stmt_info, cast_stmt,
- get_vectype_for_scalar_type (type));
+ get_vectype_for_scalar_type (vinfo, type));
return gimple_assign_lhs (cast_stmt);
}
@@ -3673,6 +3682,7 @@ adjust_bool_pattern_cast (tree type, tre
adjust_bool_pattern (tree var, tree out_type,
stmt_vec_info stmt_info, hash_map <tree, tree> &defs)
{
+ vec_info *vinfo = stmt_info->vinfo;
gimple *stmt = SSA_NAME_DEF_STMT (var);
enum tree_code rhs_code, def_rhs_code;
tree itype, cond_expr, rhs1, rhs2, irhs1, irhs2;
@@ -3834,7 +3844,7 @@ adjust_bool_pattern (tree var, tree out_
gimple_set_location (pattern_stmt, loc);
append_pattern_def_seq (stmt_info, pattern_stmt,
- get_vectype_for_scalar_type (itype));
+ get_vectype_for_scalar_type (vinfo, itype));
defs.put (var, gimple_assign_lhs (pattern_stmt));
}
@@ -3937,7 +3947,7 @@ search_type_for_mask_1 (tree var, vec_in
break;
}
- comp_vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
+ comp_vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
if (comp_vectype == NULL_TREE)
{
res = NULL_TREE;
@@ -4052,7 +4062,7 @@ vect_recog_bool_pattern (stmt_vec_info s
if (! INTEGRAL_TYPE_P (TREE_TYPE (lhs))
|| TYPE_PRECISION (TREE_TYPE (lhs)) == 1)
return NULL;
- vectype = get_vectype_for_scalar_type (TREE_TYPE (lhs));
+ vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
if (vectype == NULL_TREE)
return NULL;
@@ -4089,7 +4099,7 @@ vect_recog_bool_pattern (stmt_vec_info s
if (!useless_type_conversion_p (type, TREE_TYPE (lhs)))
{
- tree new_vectype = get_vectype_for_scalar_type (type);
+ tree new_vectype = get_vectype_for_scalar_type (vinfo, type);
append_pattern_def_seq (stmt_vinfo, pattern_stmt, new_vectype);
lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
@@ -4105,7 +4115,7 @@ vect_recog_bool_pattern (stmt_vec_info s
else if (rhs_code == COND_EXPR
&& TREE_CODE (var) == SSA_NAME)
{
- vectype = get_vectype_for_scalar_type (TREE_TYPE (lhs));
+ vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
if (vectype == NULL_TREE)
return NULL;
@@ -4119,7 +4129,7 @@ vect_recog_bool_pattern (stmt_vec_info s
tree type
= build_nonstandard_integer_type (prec,
TYPE_UNSIGNED (TREE_TYPE (var)));
- if (get_vectype_for_scalar_type (type) == NULL_TREE)
+ if (get_vectype_for_scalar_type (vinfo, type) == NULL_TREE)
return NULL;
if (!check_bool_pattern (var, vinfo, bool_stmts))
@@ -4163,7 +4173,7 @@ vect_recog_bool_pattern (stmt_vec_info s
cst0 = build_int_cst (type, 0);
cst1 = build_int_cst (type, 1);
- new_vectype = get_vectype_for_scalar_type (type);
+ new_vectype = get_vectype_for_scalar_type (vinfo, type);
rhs = vect_recog_temp_ssa_var (type, NULL);
pattern_stmt = gimple_build_assign (rhs, COND_EXPR, var, cst1, cst0);
@@ -4264,12 +4274,12 @@ vect_recog_mask_conversion_pattern (stmt
{
int rhs_index = internal_fn_stored_value_index (ifn);
tree rhs = gimple_call_arg (last_stmt, rhs_index);
- vectype1 = get_vectype_for_scalar_type (TREE_TYPE (rhs));
+ vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs));
}
else
{
lhs = gimple_call_lhs (last_stmt);
- vectype1 = get_vectype_for_scalar_type (TREE_TYPE (lhs));
+ vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
}
tree mask_arg = gimple_call_arg (last_stmt, mask_argno);
@@ -4322,7 +4332,7 @@ vect_recog_mask_conversion_pattern (stmt
/* Check for cond expression requiring mask conversion. */
if (rhs_code == COND_EXPR)
{
- vectype1 = get_vectype_for_scalar_type (TREE_TYPE (lhs));
+ vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
if (TREE_CODE (rhs1) == SSA_NAME)
{
@@ -4388,7 +4398,8 @@ vect_recog_mask_conversion_pattern (stmt
tree wide_scalar_type = build_nonstandard_integer_type
(tree_to_uhwi (TYPE_SIZE (TREE_TYPE (vectype1))),
TYPE_UNSIGNED (rhs1_type));
- tree vectype3 = get_vectype_for_scalar_type (wide_scalar_type);
+ tree vectype3 = get_vectype_for_scalar_type (vinfo,
+ wide_scalar_type);
if (expand_vec_cond_expr_p (vectype1, vectype3, TREE_CODE (rhs1)))
return NULL;
}
@@ -4544,10 +4555,11 @@ vect_add_conversion_to_pattern (tree typ
if (useless_type_conversion_p (type, TREE_TYPE (value)))
return value;
+ vec_info *vinfo = stmt_info->vinfo;
tree new_value = vect_recog_temp_ssa_var (type, NULL);
gassign *conversion = gimple_build_assign (new_value, CONVERT_EXPR, value);
append_pattern_def_seq (stmt_info, conversion,
- get_vectype_for_scalar_type (type));
+ get_vectype_for_scalar_type (vinfo, type));
return new_value;
}
@@ -4583,7 +4595,8 @@ vect_recog_gather_scatter_pattern (stmt_
return NULL;
/* Convert the mask to the right form. */
- tree gs_vectype = get_vectype_for_scalar_type (gs_info.element_type);
+ tree gs_vectype = get_vectype_for_scalar_type (loop_vinfo,
+ gs_info.element_type);
if (mask)
mask = vect_convert_mask_for_vectype (mask, gs_vectype, stmt_info,
loop_vinfo);
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c 2019-10-20 13:59:25.923035567 +0100
+++ gcc/tree-vect-slp.c 2019-10-20 14:14:09.668722173 +0100
@@ -1127,7 +1127,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
{
tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
- tree vectype = get_vectype_for_scalar_type (scalar_type);
+ tree vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
if (!vect_record_max_nunits (stmt_info, group_size, vectype, max_nunits))
return NULL;
@@ -1926,7 +1926,7 @@ vect_analyze_slp_instance (vec_info *vin
if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
{
scalar_type = TREE_TYPE (DR_REF (dr));
- vectype = get_vectype_for_scalar_type (scalar_type);
+ vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
group_size = DR_GROUP_SIZE (stmt_info);
}
else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
@@ -3287,6 +3287,7 @@ vect_get_constant_vectors (tree op, slp_
{
vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
stmt_vec_info stmt_vinfo = stmts[0];
+ vec_info *vinfo = stmt_vinfo->vinfo;
gimple *stmt = stmt_vinfo->stmt;
unsigned HOST_WIDE_INT nunits;
tree vec_cst;
@@ -3310,7 +3311,7 @@ vect_get_constant_vectors (tree op, slp_
vector_type
= build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
else
- vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
+ vector_type = get_vectype_for_scalar_type (vinfo, TREE_TYPE (op));
if (STMT_VINFO_DATA_REF (stmt_vinfo))
{
Pass a vec_info to duplicate_and_interleave
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (duplicate_and_interleave): Take a vec_info.
* tree-vect-slp.c (duplicate_and_interleave): Likewise.
(vect_get_constant_vectors): Update call accordingly.
* tree-vect-loop.c (get_initial_defs_for_reduction): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:09.672722145 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:13.256696547 +0100
@@ -1754,8 +1754,8 @@ extern bool is_simple_and_all_uses_invar
extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
unsigned int * = NULL,
tree * = NULL, tree * = NULL);
-extern void duplicate_and_interleave (gimple_seq *, tree, vec<tree>,
- unsigned int, vec<tree> &);
+extern void duplicate_and_interleave (vec_info *, gimple_seq *, tree,
+ vec<tree>, unsigned int, vec<tree> &);
extern int vect_get_place_in_interleaving_chain (stmt_vec_info, stmt_vec_info);
/* In tree-vect-patterns.c. */
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c 2019-10-20 14:14:09.668722173 +0100
+++ gcc/tree-vect-slp.c 2019-10-20 14:14:13.256696547 +0100
@@ -3183,8 +3183,9 @@ vect_mask_constant_operand_p (stmt_vec_i
to cut down on the number of interleaves. */
void
-duplicate_and_interleave (gimple_seq *seq, tree vector_type, vec<tree> elts,
- unsigned int nresults, vec<tree> &results)
+duplicate_and_interleave (vec_info *, gimple_seq *seq, tree vector_type,
+ vec<tree> elts, unsigned int nresults,
+ vec<tree> &results)
{
unsigned int nelts = elts.length ();
tree element_type = TREE_TYPE (vector_type);
@@ -3473,8 +3474,8 @@ vect_get_constant_vectors (tree op, slp_
else
{
if (vec_oprnds->is_empty ())
- duplicate_and_interleave (&ctor_seq, vector_type, elts,
- number_of_vectors,
+ duplicate_and_interleave (vinfo, &ctor_seq, vector_type,
+ elts, number_of_vectors,
permute_results);
vec_cst = permute_results[number_of_vectors - j - 1];
}
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 14:14:09.668722173 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:13.252696575 +0100
@@ -3878,6 +3878,7 @@ get_initial_defs_for_reduction (slp_tree
{
vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
stmt_vec_info stmt_vinfo = stmts[0];
+ vec_info *vinfo = stmt_vinfo->vinfo;
unsigned HOST_WIDE_INT nunits;
unsigned j, number_of_places_left_in_vector;
tree vector_type;
@@ -3970,7 +3971,7 @@ get_initial_defs_for_reduction (slp_tree
{
/* First time round, duplicate ELTS to fill the
required number of vectors. */
- duplicate_and_interleave (&ctor_seq, vector_type, elts,
+ duplicate_and_interleave (vinfo, &ctor_seq, vector_type, elts,
number_of_vectors, *vec_oprnds);
break;
}
Pass a vec_info to can_duplicate_and_interleave_p
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (can_duplicate_and_interleave_p): Take a vec_info.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
(duplicate_and_interleave): Update call accordingly.
* tree-vect-loop.c (vectorizable_reduction): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:13.256696547 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:16.688672033 +0100
@@ -1751,7 +1751,8 @@ extern void vect_get_slp_defs (vec<tree>
extern bool vect_slp_bb (basic_block);
extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
extern bool is_simple_and_all_uses_invariant (stmt_vec_info, loop_vec_info);
-extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
+extern bool can_duplicate_and_interleave_p (vec_info *, unsigned int,
+ machine_mode,
unsigned int * = NULL,
tree * = NULL, tree * = NULL);
extern void duplicate_and_interleave (vec_info *, gimple_seq *, tree,
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c 2019-10-20 14:14:13.256696547 +0100
+++ gcc/tree-vect-slp.c 2019-10-20 14:14:16.688672033 +0100
@@ -233,7 +233,8 @@ vect_get_place_in_interleaving_chain (st
(if nonnull). */
bool
-can_duplicate_and_interleave_p (unsigned int count, machine_mode elt_mode,
+can_duplicate_and_interleave_p (vec_info *, unsigned int count,
+ machine_mode elt_mode,
unsigned int *nvectors_out,
tree *vector_type_out,
tree *permutes)
@@ -432,7 +433,7 @@ vect_get_and_check_slp_defs (vec_info *v
|| dt == vect_external_def)
&& !current_vector_size.is_constant ()
&& (TREE_CODE (type) == BOOLEAN_TYPE
- || !can_duplicate_and_interleave_p (stmts.length (),
+ || !can_duplicate_and_interleave_p (vinfo, stmts.length (),
TYPE_MODE (type))))
{
if (dump_enabled_p ())
@@ -3183,7 +3184,7 @@ vect_mask_constant_operand_p (stmt_vec_i
to cut down on the number of interleaves. */
void
-duplicate_and_interleave (vec_info *, gimple_seq *seq, tree vector_type,
+duplicate_and_interleave (vec_info *vinfo, gimple_seq *seq, tree vector_type,
vec<tree> elts, unsigned int nresults,
vec<tree> &results)
{
@@ -3194,7 +3195,7 @@ duplicate_and_interleave (vec_info *, gi
unsigned int nvectors = 1;
tree new_vector_type;
tree permutes[2];
- if (!can_duplicate_and_interleave_p (nelts, TYPE_MODE (element_type),
+ if (!can_duplicate_and_interleave_p (vinfo, nelts, TYPE_MODE (element_type),
&nvectors, &new_vector_type,
permutes))
gcc_unreachable ();
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 14:14:13.252696575 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:16.684672061 +0100
@@ -6145,7 +6145,8 @@ vectorizable_reduction (stmt_vec_info st
unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_node_instance);
scalar_mode elt_mode = SCALAR_TYPE_MODE (TREE_TYPE (vectype_out));
if (!neutral_op
- && !can_duplicate_and_interleave_p (group_size, elt_mode))
+ && !can_duplicate_and_interleave_p (loop_vinfo, group_size,
+ elt_mode))
{
if (dump_enabled_p ())
dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
Pass a vec_info to simple_integer_narrowing
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-stmts.c (simple_integer_narrowing): Take a vec_info.
(vectorizable_call): Update call accordingly.
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:09.672722145 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:19.748650179 +0100
@@ -3175,7 +3175,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
*CONVERT_CODE. */
static bool
-simple_integer_narrowing (tree vectype_out, tree vectype_in,
+simple_integer_narrowing (vec_info *, tree vectype_out, tree vectype_in,
tree_code *convert_code)
{
if (!INTEGRAL_TYPE_P (TREE_TYPE (vectype_out))
@@ -3369,7 +3369,7 @@ vectorizable_call (stmt_vec_info stmt_in
if (cfn != CFN_LAST
&& (modifier == NONE
|| (modifier == NARROW
- && simple_integer_narrowing (vectype_out, vectype_in,
+ && simple_integer_narrowing (vinfo, vectype_out, vectype_in,
&convert_code))))
ifn = vectorizable_internal_function (cfn, callee, vectype_out,
vectype_in);
Pass a vec_info to supportable_narrowing_operation
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (supportable_narrowing_operation): Take a vec_info.
* tree-vect-stmts.c (supportable_narrowing_operation): Likewise.
(simple_integer_narrowing): Update call accordingly.
(vectorizable_conversion): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:16.688672033 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:23.176625692 +0100
@@ -1603,8 +1603,8 @@ extern bool supportable_widening_operati
tree, tree, enum tree_code *,
enum tree_code *, int *,
vec<tree> *);
-extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
- enum tree_code *,
+extern bool supportable_narrowing_operation (vec_info *, enum tree_code, tree,
+ tree, enum tree_code *,
int *, vec<tree> *);
extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
enum vect_cost_for_stmt, stmt_vec_info,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:19.748650179 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:23.176625692 +0100
@@ -3175,7 +3175,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
*CONVERT_CODE. */
static bool
-simple_integer_narrowing (vec_info *, tree vectype_out, tree vectype_in,
+simple_integer_narrowing (vec_info *vinfo, tree vectype_out, tree vectype_in,
tree_code *convert_code)
{
if (!INTEGRAL_TYPE_P (TREE_TYPE (vectype_out))
@@ -3185,8 +3185,8 @@ simple_integer_narrowing (vec_info *, tr
tree_code code;
int multi_step_cvt = 0;
auto_vec <tree, 8> interm_types;
- if (!supportable_narrowing_operation (NOP_EXPR, vectype_out, vectype_in,
- &code, &multi_step_cvt,
+ if (!supportable_narrowing_operation (vinfo, NOP_EXPR, vectype_out,
+ vectype_in, &code, &multi_step_cvt,
&interm_types)
|| multi_step_cvt)
return false;
@@ -4957,8 +4957,8 @@ vectorizable_conversion (stmt_vec_info s
case NARROW:
gcc_assert (op_type == unary_op);
- if (supportable_narrowing_operation (code, vectype_out, vectype_in,
- &code1, &multi_step_cvt,
+ if (supportable_narrowing_operation (vinfo, code, vectype_out,
+ vectype_in, &code1, &multi_step_cvt,
&interm_types))
break;
@@ -4974,8 +4974,8 @@ vectorizable_conversion (stmt_vec_info s
if (!supportable_convert_operation (code, cvt_type, vectype_in,
&decl1, &codecvt1))
goto unsupported;
- if (supportable_narrowing_operation (NOP_EXPR, vectype_out, cvt_type,
- &code1, &multi_step_cvt,
+ if (supportable_narrowing_operation (vinfo, NOP_EXPR, vectype_out,
+ cvt_type, &code1, &multi_step_cvt,
&interm_types))
break;
goto unsupported;
@@ -11649,7 +11649,7 @@ supportable_widening_operation (enum tre
narrowing operation (short in the above example). */
bool
-supportable_narrowing_operation (enum tree_code code,
+supportable_narrowing_operation (vec_info *, enum tree_code code,
tree vectype_out, tree vectype_in,
enum tree_code *code1, int *multi_step_cvt,
vec<tree> *interm_types)
Pass a loop_vec_info to vect_maybe_permute_loop_masks
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Take
a loop_vec_info.
(vect_set_loop_condition_masked): Update call accordingly.
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c 2019-10-17 14:22:54.919313309 +0100
+++ gcc/tree-vect-loop-manip.c 2019-10-20 14:14:26.736600265 +0100
@@ -317,7 +317,8 @@ interleave_supported_p (vec_perm_indices
latter. Return true on success, adding any new statements to SEQ. */
static bool
-vect_maybe_permute_loop_masks (gimple_seq *seq, rgroup_masks *dest_rgm,
+vect_maybe_permute_loop_masks (loop_vec_info, gimple_seq *seq,
+ rgroup_masks *dest_rgm,
rgroup_masks *src_rgm)
{
tree src_masktype = src_rgm->mask_type;
@@ -689,7 +690,8 @@ vect_set_loop_condition_masked (class lo
{
rgroup_masks *half_rgm = &(*masks)[nmasks / 2 - 1];
if (!half_rgm->masks.is_empty ()
- && vect_maybe_permute_loop_masks (&header_seq, rgm, half_rgm))
+ && vect_maybe_permute_loop_masks (loop_vinfo, &header_seq,
+ rgm, half_rgm))
continue;
}
Pass a vec_info to vect_halve_mask_nunits
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_halve_mask_nunits): Take a vec_info.
* tree-vect-loop.c (vect_halve_mask_nunits): Likewise.
* tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Update
call accordingly.
* tree-vect-stmts.c (supportable_widening_operation): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:23.176625692 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:30.500573381 +0100
@@ -1705,7 +1705,7 @@ extern opt_loop_vec_info vect_analyze_lo
extern tree vect_build_loop_niters (loop_vec_info, bool * = NULL);
extern void vect_gen_vector_loop_niters (loop_vec_info, tree, tree *,
tree *, bool);
-extern tree vect_halve_mask_nunits (tree);
+extern tree vect_halve_mask_nunits (vec_info *, tree);
extern tree vect_double_mask_nunits (tree);
extern void vect_record_loop_mask (loop_vec_info, vec_loop_masks *,
unsigned int, tree, tree);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 14:14:16.684672061 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:30.496573409 +0100
@@ -7745,7 +7745,7 @@ loop_niters_no_overflow (loop_vec_info l
/* Return a mask type with half the number of elements as TYPE. */
tree
-vect_halve_mask_nunits (tree type)
+vect_halve_mask_nunits (vec_info *, tree type)
{
poly_uint64 nunits = exact_div (TYPE_VECTOR_SUBPARTS (type), 2);
return build_truth_vector_type (nunits, current_vector_size);
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c 2019-10-20 14:14:26.736600265 +0100
+++ gcc/tree-vect-loop-manip.c 2019-10-20 14:14:30.496573409 +0100
@@ -317,7 +317,7 @@ interleave_supported_p (vec_perm_indices
latter. Return true on success, adding any new statements to SEQ. */
static bool
-vect_maybe_permute_loop_masks (loop_vec_info, gimple_seq *seq,
+vect_maybe_permute_loop_masks (loop_vec_info loop_vinfo, gimple_seq *seq,
rgroup_masks *dest_rgm,
rgroup_masks *src_rgm)
{
@@ -330,7 +330,7 @@ vect_maybe_permute_loop_masks (loop_vec_
{
/* Unpacking the source masks gives at least as many mask bits as
we need. We can then VIEW_CONVERT any excess bits away. */
- tree unpack_masktype = vect_halve_mask_nunits (src_masktype);
+ tree unpack_masktype = vect_halve_mask_nunits (loop_vinfo, src_masktype);
for (unsigned int i = 0; i < dest_rgm->masks.length (); ++i)
{
tree src = src_rgm->masks[i / 2];
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:23.176625692 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:30.500573381 +0100
@@ -11385,6 +11385,7 @@ supportable_widening_operation (enum tre
int *multi_step_cvt,
vec<tree> *interm_types)
{
+ vec_info *vinfo = stmt_info->vinfo;
loop_vec_info loop_info = STMT_VINFO_LOOP_VINFO (stmt_info);
class loop *vect_loop = NULL;
machine_mode vec_mode;
@@ -11570,7 +11571,7 @@ supportable_widening_operation (enum tre
intermediate_mode = insn_data[icode1].operand[0].mode;
if (VECTOR_BOOLEAN_TYPE_P (prev_type))
{
- intermediate_type = vect_halve_mask_nunits (prev_type);
+ intermediate_type = vect_halve_mask_nunits (vinfo, prev_type);
if (intermediate_mode != TYPE_MODE (intermediate_type))
return false;
}
Pass a vec_info to vect_double_mask_nunits
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_double_mask_nunits): Take a vec_info.
* tree-vect-loop.c (vect_double_mask_nunits): Likewise.
* tree-vect-stmts.c (supportable_narrowing_operation): Update call
accordingly.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:30.500573381 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:33.692550581 +0100
@@ -1706,7 +1706,7 @@ extern tree vect_build_loop_niters (loop
extern void vect_gen_vector_loop_niters (loop_vec_info, tree, tree *,
tree *, bool);
extern tree vect_halve_mask_nunits (vec_info *, tree);
-extern tree vect_double_mask_nunits (tree);
+extern tree vect_double_mask_nunits (vec_info *, tree);
extern void vect_record_loop_mask (loop_vec_info, vec_loop_masks *,
unsigned int, tree, tree);
extern tree vect_get_loop_mask (gimple_stmt_iterator *, vec_loop_masks *,
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 14:14:30.496573409 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:33.692550581 +0100
@@ -7754,7 +7754,7 @@ vect_halve_mask_nunits (vec_info *, tree
/* Return a mask type with twice as many elements as TYPE. */
tree
-vect_double_mask_nunits (tree type)
+vect_double_mask_nunits (vec_info *, tree type)
{
poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type) * 2;
return build_truth_vector_type (nunits, current_vector_size);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:30.500573381 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:33.692550581 +0100
@@ -11650,7 +11650,7 @@ supportable_widening_operation (enum tre
narrowing operation (short in the above example). */
bool
-supportable_narrowing_operation (vec_info *, enum tree_code code,
+supportable_narrowing_operation (vec_info *vinfo, enum tree_code code,
tree vectype_out, tree vectype_in,
enum tree_code *code1, int *multi_step_cvt,
vec<tree> *interm_types)
@@ -11759,7 +11759,7 @@ supportable_narrowing_operation (vec_inf
intermediate_mode = insn_data[icode1].operand[0].mode;
if (VECTOR_BOOLEAN_TYPE_P (prev_type))
{
- intermediate_type = vect_double_mask_nunits (prev_type);
+ intermediate_type = vect_double_mask_nunits (vinfo, prev_type);
if (intermediate_mode != TYPE_MODE (intermediate_type))
return false;
}
* [3/3] Replace current_vector_size with vec_info::vector_size
2019-10-20 13:23 [0/3] Turn current_vector_size into a vec_info field Richard Sandiford
2019-10-20 13:27 ` [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type Richard Sandiford
2019-10-20 13:30 ` [2/3] Pass vec_infos to more routines Richard Sandiford
@ 2019-10-20 14:28 ` Richard Sandiford
2019-10-21 6:01 ` [0/3] Turn current_vector_size into a vec_info field Richard Biener
3 siblings, 0 replies; 7+ messages in thread
From: Richard Sandiford @ 2019-10-20 14:28 UTC (permalink / raw)
To: gcc-patches
Now that all necessary routines have access to the vec_info,
it's trivial to convert current_vector_size to a member variable.
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vec_info::vector_size): New member variable.
(vect_update_max_nunits): Update comment.
(current_vector_size): Delete.
* tree-vect-stmts.c (current_vector_size): Likewise.
(get_vectype_for_scalar_type): Use vec_info::vector_size instead
of current_vector_size.
(get_mask_type_for_scalar_type): Likewise.
* tree-vectorizer.c (try_vectorize_loop_1): Likewise.
* tree-vect-loop.c (vect_update_vf_for_slp): Likewise.
(vect_analyze_loop, vect_halve_mask_nunits): Likewise.
(vect_double_mask_nunits, vect_transform_loop): Likewise.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
(vect_make_slp_decision, vect_slp_bb_region): Likewise.
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h 2019-10-20 14:14:33.692550581 +0100
+++ gcc/tree-vectorizer.h 2019-10-20 14:14:36.768528611 +0100
@@ -326,6 +326,10 @@ typedef std::pair<tree, tree> vec_object
/* Cost data used by the target cost model. */
void *target_cost_data;
+ /* The vector size for this loop in bytes, or 0 if we haven't picked
+ a size yet. */
+ poly_uint64 vector_size;
+
private:
stmt_vec_info new_stmt_vec_info (gimple *stmt);
void set_vinfo_for_stmt (gimple *, stmt_vec_info);
@@ -1472,7 +1476,7 @@ vect_get_num_copies (loop_vec_info loop_
static inline void
vect_update_max_nunits (poly_uint64 *max_nunits, poly_uint64 nunits)
{
- /* All unit counts have the form current_vector_size * X for some
+ /* All unit counts have the form vec_info::vector_size * X for some
rational X, so two unit sizes must have a common multiple.
Everything is a multiple of the initial value of 1. */
*max_nunits = force_common_multiple (*max_nunits, nunits);
@@ -1588,7 +1592,6 @@ extern dump_user_location_t find_loop_lo
extern bool vect_can_advance_ivs_p (loop_vec_info);
/* In tree-vect-stmts.c. */
-extern poly_uint64 current_vector_size;
extern tree get_vectype_for_scalar_type (vec_info *, tree);
extern tree get_vectype_for_scalar_type_and_size (tree, poly_uint64);
extern tree get_mask_type_for_scalar_type (vec_info *, tree);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2019-10-20 14:14:33.692550581 +0100
+++ gcc/tree-vect-stmts.c 2019-10-20 14:14:36.768528611 +0100
@@ -11133,22 +11133,20 @@ get_vectype_for_scalar_type_and_size (tr
return vectype;
}
-poly_uint64 current_vector_size;
-
/* Function get_vectype_for_scalar_type.
Returns the vector type corresponding to SCALAR_TYPE as supported
by the target. */
tree
-get_vectype_for_scalar_type (vec_info *, tree scalar_type)
+get_vectype_for_scalar_type (vec_info *vinfo, tree scalar_type)
{
tree vectype;
vectype = get_vectype_for_scalar_type_and_size (scalar_type,
- current_vector_size);
+ vinfo->vector_size);
if (vectype
- && known_eq (current_vector_size, 0U))
- current_vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
+ && known_eq (vinfo->vector_size, 0U))
+ vinfo->vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
return vectype;
}
@@ -11166,7 +11164,7 @@ get_mask_type_for_scalar_type (vec_info
return NULL;
return build_truth_vector_type (TYPE_VECTOR_SUBPARTS (vectype),
- current_vector_size);
+ vinfo->vector_size);
}
/* Function get_same_sized_vectype
Index: gcc/tree-vectorizer.c
===================================================================
--- gcc/tree-vectorizer.c 2019-10-20 14:13:50.784857051 +0100
+++ gcc/tree-vectorizer.c 2019-10-20 14:14:36.768528611 +0100
@@ -971,7 +971,7 @@ try_vectorize_loop_1 (hash_table<simduid
unsigned HOST_WIDE_INT bytes;
if (dump_enabled_p ())
{
- if (current_vector_size.is_constant (&bytes))
+ if (loop_vinfo->vector_size.is_constant (&bytes))
dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location,
"loop vectorized using %wu byte vectors\n", bytes);
else
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c 2019-10-20 14:14:33.692550581 +0100
+++ gcc/tree-vect-loop.c 2019-10-20 14:14:36.764528643 +0100
@@ -1414,7 +1414,7 @@ vect_update_vf_for_slp (loop_vec_info lo
dump_printf_loc (MSG_NOTE, vect_location,
"Loop contains SLP and non-SLP stmts\n");
/* Both the vectorization factor and unroll factor have the form
- current_vector_size * X for some rational X, so they must have
+ loop_vinfo->vector_size * X for some rational X, so they must have
a common multiple. */
vectorization_factor
= force_common_multiple (vectorization_factor,
@@ -2311,7 +2311,6 @@ vect_analyze_loop (class loop *loop, loo
auto_vector_sizes vector_sizes;
/* Autodetect first vector size we try. */
- current_vector_size = 0;
targetm.vectorize.autovectorize_vector_sizes (&vector_sizes,
loop->simdlen != 0);
unsigned int next_size = 0;
@@ -2333,7 +2332,7 @@ vect_analyze_loop (class loop *loop, loo
unsigned n_stmts = 0;
poly_uint64 autodetected_vector_size = 0;
opt_loop_vec_info first_loop_vinfo = opt_loop_vec_info::success (NULL);
- poly_uint64 first_vector_size = 0;
+ poly_uint64 next_vector_size = 0;
while (1)
{
/* Check the CFG characteristics of the loop (nesting, entry/exit). */
@@ -2347,6 +2346,7 @@ vect_analyze_loop (class loop *loop, loo
gcc_checking_assert (first_loop_vinfo == NULL);
return loop_vinfo;
}
+ loop_vinfo->vector_size = next_vector_size;
bool fatal = false;
@@ -2365,7 +2365,6 @@ vect_analyze_loop (class loop *loop, loo
if (first_loop_vinfo == NULL)
{
first_loop_vinfo = loop_vinfo;
- first_vector_size = current_vector_size;
loop->aux = NULL;
}
else
@@ -2381,7 +2380,9 @@ vect_analyze_loop (class loop *loop, loo
- delete loop_vinfo;
if (next_size == 0)
- autodetected_vector_size = current_vector_size;
+ autodetected_vector_size = loop_vinfo->vector_size;
+
+ delete loop_vinfo;
if (next_size < vector_sizes.length ()
&& known_eq (vector_sizes[next_size], autodetected_vector_size))
@@ -2394,17 +2393,16 @@ vect_analyze_loop (class loop *loop, loo
}
if (next_size == vector_sizes.length ()
- || known_eq (current_vector_size, 0U))
+ || known_eq (autodetected_vector_size, 0U))
{
if (first_loop_vinfo)
{
- current_vector_size = first_vector_size;
loop->aux = (loop_vec_info) first_loop_vinfo;
if (dump_enabled_p ())
{
dump_printf_loc (MSG_NOTE, vect_location,
"***** Choosing vector size ");
- dump_dec (MSG_NOTE, current_vector_size);
+ dump_dec (MSG_NOTE, first_loop_vinfo->vector_size);
dump_printf (MSG_NOTE, "\n");
}
return first_loop_vinfo;
@@ -2414,13 +2412,13 @@ vect_analyze_loop (class loop *loop, loo
}
/* Try the next biggest vector size. */
- current_vector_size = vector_sizes[next_size++];
+ next_vector_size = vector_sizes[next_size++];
if (dump_enabled_p ())
{
dump_printf_loc (MSG_NOTE, vect_location,
"***** Re-trying analysis with "
"vector size ");
- dump_dec (MSG_NOTE, current_vector_size);
+ dump_dec (MSG_NOTE, next_vector_size);
dump_printf (MSG_NOTE, "\n");
}
}
@@ -7745,19 +7743,19 @@ loop_niters_no_overflow (loop_vec_info l
/* Return a mask type with half the number of elements as TYPE. */
tree
-vect_halve_mask_nunits (vec_info *, tree type)
+vect_halve_mask_nunits (vec_info *vinfo, tree type)
{
poly_uint64 nunits = exact_div (TYPE_VECTOR_SUBPARTS (type), 2);
- return build_truth_vector_type (nunits, current_vector_size);
+ return build_truth_vector_type (nunits, vinfo->vector_size);
}
/* Return a mask type with twice as many elements as TYPE. */
tree
-vect_double_mask_nunits (vec_info *, tree type)
+vect_double_mask_nunits (vec_info *vinfo, tree type)
{
poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type) * 2;
- return build_truth_vector_type (nunits, current_vector_size);
+ return build_truth_vector_type (nunits, vinfo->vector_size);
}
/* Record that a fully-masked version of LOOP_VINFO would need MASKS to
@@ -8243,7 +8241,7 @@ vect_transform_loop (loop_vec_info loop_
{
dump_printf_loc (MSG_NOTE, vect_location,
"LOOP EPILOGUE VECTORIZED (VS=");
- dump_dec (MSG_NOTE, current_vector_size);
+ dump_dec (MSG_NOTE, loop_vinfo->vector_size);
dump_printf (MSG_NOTE, ")\n");
}
}
@@ -8295,14 +8293,14 @@ vect_transform_loop (loop_vec_info loop_
unsigned int ratio;
while (next_size < vector_sizes.length ()
- && !(constant_multiple_p (current_vector_size,
+ && !(constant_multiple_p (loop_vinfo->vector_size,
vector_sizes[next_size], &ratio)
&& eiters >= lowest_vf / ratio))
next_size += 1;
}
else
while (next_size < vector_sizes.length ()
- && maybe_lt (current_vector_size, vector_sizes[next_size]))
+ && maybe_lt (loop_vinfo->vector_size, vector_sizes[next_size]))
next_size += 1;
if (next_size == vector_sizes.length ())
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c 2019-10-20 14:14:16.688672033 +0100
+++ gcc/tree-vect-slp.c 2019-10-20 14:14:36.764528643 +0100
@@ -233,7 +233,7 @@ vect_get_place_in_interleaving_chain (st
(if nonnull). */
bool
-can_duplicate_and_interleave_p (vec_info *, unsigned int count,
+can_duplicate_and_interleave_p (vec_info *vinfo, unsigned int count,
machine_mode elt_mode,
unsigned int *nvectors_out,
tree *vector_type_out,
@@ -246,7 +246,7 @@ can_duplicate_and_interleave_p (vec_info
{
scalar_int_mode int_mode;
poly_int64 elt_bits = elt_bytes * BITS_PER_UNIT;
- if (multiple_p (current_vector_size, elt_bytes, &nelts)
+ if (multiple_p (vinfo->vector_size, elt_bytes, &nelts)
&& int_mode_for_size (elt_bits, 0).exists (&int_mode))
{
tree int_type = build_nonstandard_integer_type
@@ -431,7 +431,7 @@ vect_get_and_check_slp_defs (vec_info *v
}
if ((dt == vect_constant_def
|| dt == vect_external_def)
- && !current_vector_size.is_constant ()
+ && !vinfo->vector_size.is_constant ()
&& (TREE_CODE (type) == BOOLEAN_TYPE
|| !can_duplicate_and_interleave_p (vinfo, stmts.length (),
TYPE_MODE (type))))
@@ -2250,7 +2250,7 @@ vect_make_slp_decision (loop_vec_info lo
FOR_EACH_VEC_ELT (slp_instances, i, instance)
{
/* FORNOW: SLP if you can. */
- /* All unroll factors have the form current_vector_size * X for some
+ /* All unroll factors have the form vinfo->vector_size * X for some
rational X, so they must have a common multiple. */
unrolling_factor
= force_common_multiple (unrolling_factor,
@@ -2986,7 +2986,7 @@ vect_slp_bb_region (gimple_stmt_iterator
auto_vector_sizes vector_sizes;
/* Autodetect first vector size we try. */
- current_vector_size = 0;
+ poly_uint64 next_vector_size = 0;
targetm.vectorize.autovectorize_vector_sizes (&vector_sizes, false);
unsigned int next_size = 0;
@@ -3005,6 +3005,7 @@ vect_slp_bb_region (gimple_stmt_iterator
bb_vinfo->shared->save_datarefs ();
else
bb_vinfo->shared->check_datarefs ();
+ bb_vinfo->vector_size = next_vector_size;
if (vect_slp_analyze_bb_1 (bb_vinfo, n_stmts, fatal)
&& dbg_cnt (vect_slp))
@@ -3018,7 +3019,7 @@ vect_slp_bb_region (gimple_stmt_iterator
unsigned HOST_WIDE_INT bytes;
if (dump_enabled_p ())
{
- if (current_vector_size.is_constant (&bytes))
+ if (bb_vinfo->vector_size.is_constant (&bytes))
dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location,
"basic block part vectorized using %wu byte "
"vectors\n", bytes);
@@ -3030,10 +3031,11 @@ vect_slp_bb_region (gimple_stmt_iterator
vectorized = true;
}
- delete bb_vinfo;
if (next_size == 0)
- autodetected_vector_size = current_vector_size;
+ autodetected_vector_size = bb_vinfo->vector_size;
+
+ delete bb_vinfo;
if (next_size < vector_sizes.length ()
&& known_eq (vector_sizes[next_size], autodetected_vector_size))
@@ -3041,20 +3043,20 @@ vect_slp_bb_region (gimple_stmt_iterator
if (vectorized
|| next_size == vector_sizes.length ()
- || known_eq (current_vector_size, 0U)
+ || known_eq (autodetected_vector_size, 0U)
/* If vect_slp_analyze_bb_1 signaled that analysis for all
vector sizes will fail do not bother iterating. */
|| fatal)
return vectorized;
/* Try the next biggest vector size. */
- current_vector_size = vector_sizes[next_size++];
+ next_vector_size = vector_sizes[next_size++];
if (dump_enabled_p ())
{
dump_printf_loc (MSG_NOTE, vect_location,
"***** Re-trying analysis with "
"vector size ");
- dump_dec (MSG_NOTE, current_vector_size);
+ dump_dec (MSG_NOTE, next_vector_size);
dump_printf (MSG_NOTE, "\n");
}
}
* Re: [0/3] Turn current_vector_size into a vec_info field
2019-10-20 13:23 [0/3] Turn current_vector_size into a vec_info field Richard Sandiford
` (2 preceding siblings ...)
2019-10-20 14:28 ` [3/3] Replace current_vector_size with vec_info::vector_size Richard Sandiford
@ 2019-10-21 6:01 ` Richard Biener
3 siblings, 0 replies; 7+ messages in thread
From: Richard Biener @ 2019-10-21 6:01 UTC (permalink / raw)
To: gcc-patches, Richard Sandiford
On October 20, 2019 3:21:32 PM GMT+02:00, Richard Sandiford <richard.sandiford@arm.com> wrote:
>Now that we're keeping multiple vec_infos around at the same time,
>it seemed worth turning current_vector_size into a vec_info field.
>This for example simplifies the book-keeping in vect_analyze_loop
>and helps with some follow-on changes.
>
>Tested on aarch64-linux-gnu and x86_64-linux-gnu.
OK.
Thanks,
Richard.
>Richard
* Re: [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type
2019-10-20 13:27 ` [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type Richard Sandiford
@ 2019-10-30 14:22 ` Richard Biener
0 siblings, 0 replies; 7+ messages in thread
From: Richard Biener @ 2019-10-30 14:22 UTC (permalink / raw)
To: Richard Sandiford; +Cc: GCC Patches
On Sun, Oct 20, 2019 at 3:23 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> The increase_alignment pass was using get_vectype_for_scalar_type
> to get the preferred vector type for each array element type.
> This has the effect of carrying over the vector size chosen by
> the first successful call to all subsequent calls, whereas it seems
> more natural to treat each array type independently and pick the
> "best" vector type for each element type.
OK.
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.c (get_vec_alignment_for_array_type): Use
> get_vectype_for_scalar_type_and_size instead of
> get_vectype_for_scalar_type.
>
> Index: gcc/tree-vectorizer.c
> ===================================================================
> --- gcc/tree-vectorizer.c 2019-10-20 13:58:02.091634417 +0100
> +++ gcc/tree-vectorizer.c 2019-10-20 14:13:50.784857051 +0100
> @@ -1347,7 +1347,8 @@ get_vec_alignment_for_array_type (tree t
> gcc_assert (TREE_CODE (type) == ARRAY_TYPE);
> poly_uint64 array_size, vector_size;
>
> - tree vectype = get_vectype_for_scalar_type (strip_array_types (type));
> + tree scalar_type = strip_array_types (type);
> + tree vectype = get_vectype_for_scalar_type_and_size (scalar_type, 0);
> if (!vectype
> || !poly_int_tree_p (TYPE_SIZE (type), &array_size)
> || !poly_int_tree_p (TYPE_SIZE (vectype), &vector_size)
* Re: [2/3] Pass vec_infos to more routines
2019-10-20 13:30 ` [2/3] Pass vec_infos to more routines Richard Sandiford
@ 2019-10-30 14:25 ` Richard Biener
0 siblings, 0 replies; 7+ messages in thread
From: Richard Biener @ 2019-10-30 14:25 UTC (permalink / raw)
To: Richard Sandiford; +Cc: GCC Patches
On Sun, Oct 20, 2019 at 3:29 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> These 11 patches just pass vec_infos to one routine each. Splitting
> them up makes it easier to write the changelogs, but they're so trivial
> that it seemed better to send them all in one message.
OK.
>
> Pass a vec_info to vect_supportable_shift
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (vect_supportable_shift): Take a vec_info.
> * tree-vect-stmts.c (vect_supportable_shift): Likewise.
> * tree-vect-patterns.c (vect_synth_mult_by_constant): Update call
> accordingly.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 13:58:02.095634389 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:00.632786715 +0100
> @@ -1634,7 +1634,7 @@ extern void vect_get_load_cost (stmt_vec
> stmt_vector_for_cost *, bool);
> extern void vect_get_store_cost (stmt_vec_info, int,
> unsigned int *, stmt_vector_for_cost *);
> -extern bool vect_supportable_shift (enum tree_code, tree);
> +extern bool vect_supportable_shift (vec_info *, enum tree_code, tree);
> extern tree vect_gen_perm_mask_any (tree, const vec_perm_indices &);
> extern tree vect_gen_perm_mask_checked (tree, const vec_perm_indices &);
> extern void optimize_mask_stores (class loop*);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 13:58:02.111634275 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:00.628786742 +0100
> @@ -5465,7 +5465,7 @@ vectorizable_assignment (stmt_vec_info s
> either as shift by a scalar or by a vector. */
>
> bool
> -vect_supportable_shift (enum tree_code code, tree scalar_type)
> +vect_supportable_shift (vec_info *, enum tree_code code, tree scalar_type)
> {
>
> machine_mode vec_mode;
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c 2019-10-17 14:22:55.519309037 +0100
> +++ gcc/tree-vect-patterns.c 2019-10-20 14:14:00.628786742 +0100
> @@ -2720,6 +2720,7 @@ apply_binop_and_append_stmt (tree_code c
> vect_synth_mult_by_constant (tree op, tree val,
> stmt_vec_info stmt_vinfo)
> {
> + vec_info *vinfo = stmt_vinfo->vinfo;
> tree itype = TREE_TYPE (op);
> machine_mode mode = TYPE_MODE (itype);
> struct algorithm alg;
> @@ -2738,7 +2739,7 @@ vect_synth_mult_by_constant (tree op, tr
>
> /* Targets that don't support vector shifts but support vector additions
> can synthesize shifts that way. */
> - bool synth_shift_p = !vect_supportable_shift (LSHIFT_EXPR, multtype);
> + bool synth_shift_p = !vect_supportable_shift (vinfo, LSHIFT_EXPR, multtype);
>
> HOST_WIDE_INT hwval = tree_to_shwi (val);
> /* Use MAX_COST here as we don't want to limit the sequence on rtx costs.
>
>
> Pass a vec_info to vect_supportable_direct_optab_p
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vect-patterns.c (vect_supportable_direct_optab_p): Take
> a vec_info.
> (vect_recog_dot_prod_pattern): Update call accordingly.
> (vect_recog_sad_pattern, vect_recog_pow_pattern): Likewise.
> (vect_recog_widen_sum_pattern): Likewise.
>
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c 2019-10-20 14:14:00.628786742 +0100
> +++ gcc/tree-vect-patterns.c 2019-10-20 14:14:03.588765602 +0100
> @@ -187,7 +187,7 @@ vect_get_external_def_edge (vec_info *vi
> is nonnull. */
>
> static bool
> -vect_supportable_direct_optab_p (tree otype, tree_code code,
> +vect_supportable_direct_optab_p (vec_info *, tree otype, tree_code code,
> tree itype, tree *vecotype_out,
> tree *vecitype_out = NULL)
> {
> @@ -985,7 +985,7 @@ vect_recog_dot_prod_pattern (stmt_vec_in
> vect_pattern_detected ("vect_recog_dot_prod_pattern", last_stmt);
>
> tree half_vectype;
> - if (!vect_supportable_direct_optab_p (type, DOT_PROD_EXPR, half_type,
> + if (!vect_supportable_direct_optab_p (vinfo, type, DOT_PROD_EXPR, half_type,
> type_out, &half_vectype))
> return NULL;
>
> @@ -1143,7 +1143,7 @@ vect_recog_sad_pattern (stmt_vec_info st
> vect_pattern_detected ("vect_recog_sad_pattern", last_stmt);
>
> tree half_vectype;
> - if (!vect_supportable_direct_optab_p (sum_type, SAD_EXPR, half_type,
> + if (!vect_supportable_direct_optab_p (vinfo, sum_type, SAD_EXPR, half_type,
> type_out, &half_vectype))
> return NULL;
>
> @@ -1273,6 +1273,7 @@ vect_recog_widen_mult_pattern (stmt_vec_
> static gimple *
> vect_recog_pow_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
> {
> + vec_info *vinfo = stmt_vinfo->vinfo;
> gimple *last_stmt = stmt_vinfo->stmt;
> tree base, exp;
> gimple *stmt;
> @@ -1366,7 +1367,7 @@ vect_recog_pow_pattern (stmt_vec_info st
> || (TREE_CODE (exp) == REAL_CST
> && real_equal (&TREE_REAL_CST (exp), &dconst2)))
> {
> - if (!vect_supportable_direct_optab_p (TREE_TYPE (base), MULT_EXPR,
> + if (!vect_supportable_direct_optab_p (vinfo, TREE_TYPE (base), MULT_EXPR,
> TREE_TYPE (base), type_out))
> return NULL;
>
> @@ -1472,8 +1473,8 @@ vect_recog_widen_sum_pattern (stmt_vec_i
>
> vect_pattern_detected ("vect_recog_widen_sum_pattern", last_stmt);
>
> - if (!vect_supportable_direct_optab_p (type, WIDEN_SUM_EXPR, unprom0.type,
> - type_out))
> + if (!vect_supportable_direct_optab_p (vinfo, type, WIDEN_SUM_EXPR,
> + unprom0.type, type_out))
> return NULL;
>
> var = vect_recog_temp_ssa_var (type, NULL);
>
>
>
> Pass a vec_info to get_mask_type_for_scalar_type
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (get_mask_type_for_scalar_type): Take a vec_info.
> * tree-vect-stmts.c (get_mask_type_for_scalar_type): Likewise.
> (vect_check_load_store_mask): Update call accordingly.
> (vect_get_mask_type_for_stmt): Likewise.
> * tree-vect-patterns.c (check_bool_pattern): Likewise.
> (search_type_for_mask_1, vect_recog_mask_conversion_pattern): Likewise.
> (vect_convert_mask_for_vectype): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:00.632786715 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:06.472745000 +0100
> @@ -1591,7 +1591,7 @@ extern bool vect_can_advance_ivs_p (loop
> extern poly_uint64 current_vector_size;
> extern tree get_vectype_for_scalar_type (tree);
> extern tree get_vectype_for_scalar_type_and_size (tree, poly_uint64);
> -extern tree get_mask_type_for_scalar_type (tree);
> +extern tree get_mask_type_for_scalar_type (vec_info *, tree);
> extern tree get_same_sized_vectype (tree, tree);
> extern bool vect_get_loop_mask_type (loop_vec_info);
> extern bool vect_is_simple_use (tree, vec_info *, enum vect_def_type *,
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:00.628786742 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:06.472745000 +0100
> @@ -2558,6 +2558,7 @@ vect_check_load_store_mask (stmt_vec_inf
> vect_def_type *mask_dt_out,
> tree *mask_vectype_out)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> if (!VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (mask)))
> {
> if (dump_enabled_p ())
> @@ -2586,7 +2587,7 @@ vect_check_load_store_mask (stmt_vec_inf
>
> tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> if (!mask_vectype)
> - mask_vectype = get_mask_type_for_scalar_type (TREE_TYPE (vectype));
> + mask_vectype = get_mask_type_for_scalar_type (vinfo, TREE_TYPE (vectype));
>
> if (!mask_vectype || !VECTOR_BOOLEAN_TYPE_P (mask_vectype))
> {
> @@ -11156,7 +11157,7 @@ get_vectype_for_scalar_type (tree scalar
> of vectors of specified SCALAR_TYPE as supported by target. */
>
> tree
> -get_mask_type_for_scalar_type (tree scalar_type)
> +get_mask_type_for_scalar_type (vec_info *, tree scalar_type)
> {
> tree vectype = get_vectype_for_scalar_type (scalar_type);
>
> @@ -11986,6 +11987,7 @@ vect_get_vector_types_for_stmt (stmt_vec
> opt_tree
> vect_get_mask_type_for_stmt (stmt_vec_info stmt_info)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple *stmt = stmt_info->stmt;
> tree mask_type = NULL;
> tree vectype, scalar_type;
> @@ -11995,7 +11997,7 @@ vect_get_mask_type_for_stmt (stmt_vec_in
> && !VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
> {
> scalar_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
> - mask_type = get_mask_type_for_scalar_type (scalar_type);
> + mask_type = get_mask_type_for_scalar_type (vinfo, scalar_type);
>
> if (!mask_type)
> return opt_tree::failure_at (stmt,
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c 2019-10-20 14:14:03.588765602 +0100
> +++ gcc/tree-vect-patterns.c 2019-10-20 14:14:06.468745032 +0100
> @@ -3616,7 +3616,8 @@ check_bool_pattern (tree var, vec_info *
> if (comp_vectype == NULL_TREE)
> return false;
>
> - tree mask_type = get_mask_type_for_scalar_type (TREE_TYPE (rhs1));
> + tree mask_type = get_mask_type_for_scalar_type (vinfo,
> + TREE_TYPE (rhs1));
> if (mask_type
> && expand_vec_cmp_expr_p (comp_vectype, mask_type, rhs_code))
> return false;
> @@ -3943,7 +3944,7 @@ search_type_for_mask_1 (tree var, vec_in
> break;
> }
>
> - mask_type = get_mask_type_for_scalar_type (TREE_TYPE (rhs1));
> + mask_type = get_mask_type_for_scalar_type (vinfo, TREE_TYPE (rhs1));
> if (!mask_type
> || !expand_vec_cmp_expr_p (comp_vectype, mask_type, rhs_code))
> {
> @@ -4275,7 +4276,7 @@ vect_recog_mask_conversion_pattern (stmt
> tree mask_arg_type = search_type_for_mask (mask_arg, vinfo);
> if (!mask_arg_type)
> return NULL;
> - vectype2 = get_mask_type_for_scalar_type (mask_arg_type);
> + vectype2 = get_mask_type_for_scalar_type (vinfo, mask_arg_type);
>
> if (!vectype1 || !vectype2
> || known_eq (TYPE_VECTOR_SUBPARTS (vectype1),
> @@ -4352,7 +4353,7 @@ vect_recog_mask_conversion_pattern (stmt
> else
> return NULL;
>
> - vectype2 = get_mask_type_for_scalar_type (rhs1_type);
> + vectype2 = get_mask_type_for_scalar_type (vinfo, rhs1_type);
>
> if (!vectype1 || !vectype2)
> return NULL;
> @@ -4442,14 +4443,14 @@ vect_recog_mask_conversion_pattern (stmt
>
> if (TYPE_PRECISION (rhs1_type) < TYPE_PRECISION (rhs2_type))
> {
> - vectype1 = get_mask_type_for_scalar_type (rhs1_type);
> + vectype1 = get_mask_type_for_scalar_type (vinfo, rhs1_type);
> if (!vectype1)
> return NULL;
> rhs2 = build_mask_conversion (rhs2, vectype1, stmt_vinfo);
> }
> else
> {
> - vectype1 = get_mask_type_for_scalar_type (rhs2_type);
> + vectype1 = get_mask_type_for_scalar_type (vinfo, rhs2_type);
> if (!vectype1)
> return NULL;
> rhs1 = build_mask_conversion (rhs1, vectype1, stmt_vinfo);
> @@ -4520,7 +4521,7 @@ vect_convert_mask_for_vectype (tree mask
> tree mask_type = search_type_for_mask (mask, vinfo);
> if (mask_type)
> {
> - tree mask_vectype = get_mask_type_for_scalar_type (mask_type);
> + tree mask_vectype = get_mask_type_for_scalar_type (vinfo, mask_type);
> if (mask_vectype
> && maybe_ne (TYPE_VECTOR_SUBPARTS (vectype),
> TYPE_VECTOR_SUBPARTS (mask_vectype)))
>
>
> Pass a vec_info to get_vectype_for_scalar_type
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (get_vectype_for_scalar_type): Take a vec_info.
> * tree-vect-stmts.c (get_vectype_for_scalar_type): Likewise.
> (vect_prologue_cost_for_slp_op): Update call accordingly.
> (vect_get_vec_def_for_operand, vect_get_gather_scatter_ops)
> (vect_get_strided_load_store_ops, vectorizable_simd_clone_call)
> (vect_supportable_shift, vect_is_simple_cond, vectorizable_comparison)
> (get_mask_type_for_scalar_type): Likewise.
> (vect_get_vector_types_for_stmt): Likewise.
> * tree-vect-data-refs.c (vect_analyze_data_refs): Likewise.
> * tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
> (get_initial_def_for_reduction, build_vect_cond_expr): Likewise.
> * tree-vect-patterns.c (vect_supportable_direct_optab_p): Likewise.
> (vect_split_statement, vect_convert_input): Likewise.
> (vect_recog_widen_op_pattern, vect_recog_pow_pattern): Likewise.
> (vect_recog_over_widening_pattern, vect_recog_mulhs_pattern): Likewise.
> (vect_recog_average_pattern, vect_recog_cast_forwprop_pattern)
> (vect_recog_rotate_pattern, vect_recog_vector_vector_shift_pattern)
> (vect_synth_mult_by_constant, vect_recog_mult_pattern): Likewise.
> (vect_recog_divmod_pattern, vect_recog_mixed_size_cond_pattern)
> (check_bool_pattern, adjust_bool_pattern_cast, adjust_bool_pattern)
> (search_type_for_mask_1, vect_recog_bool_pattern): Likewise.
> (vect_recog_mask_conversion_pattern): Likewise.
> (vect_add_conversion_to_pattern): Likewise.
> (vect_recog_gather_scatter_pattern): Likewise.
> * tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
> (vect_analyze_slp_instance, vect_get_constant_vectors): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:06.472745000 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:09.672722145 +0100
> @@ -1589,7 +1589,7 @@ extern bool vect_can_advance_ivs_p (loop
>
> /* In tree-vect-stmts.c. */
> extern poly_uint64 current_vector_size;
> -extern tree get_vectype_for_scalar_type (tree);
> +extern tree get_vectype_for_scalar_type (vec_info *, tree);
> extern tree get_vectype_for_scalar_type_and_size (tree, poly_uint64);
> extern tree get_mask_type_for_scalar_type (vec_info *, tree);
> extern tree get_same_sized_vectype (tree, tree);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:06.472745000 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:09.672722145 +0100
> @@ -796,6 +796,7 @@ vect_prologue_cost_for_slp_op (slp_tree
> unsigned opno, enum vect_def_type dt,
> stmt_vector_for_cost *cost_vec)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
> tree op = gimple_op (stmt, opno);
> unsigned prologue_cost = 0;
> @@ -803,7 +804,7 @@ vect_prologue_cost_for_slp_op (slp_tree
> /* Without looking at the actual initializer a vector of
> constants can be implemented as load from the constant pool.
> When all elements are the same we can use a splat. */
> - tree vectype = get_vectype_for_scalar_type (TREE_TYPE (op));
> + tree vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (op));
> unsigned group_size = SLP_TREE_SCALAR_STMTS (node).length ();
> unsigned num_vects_to_check;
> unsigned HOST_WIDE_INT const_nunits;
> @@ -1610,7 +1611,7 @@ vect_get_vec_def_for_operand (tree op, s
> && VECTOR_BOOLEAN_TYPE_P (stmt_vectype))
> vector_type = build_same_sized_truth_vector_type (stmt_vectype);
> else
> - vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
> + vector_type = get_vectype_for_scalar_type (loop_vinfo, TREE_TYPE (op));
>
> gcc_assert (vector_type);
> return vect_init_vector (stmt_vinfo, op, vector_type, NULL);
> @@ -2975,6 +2976,7 @@ vect_get_gather_scatter_ops (class loop
> gather_scatter_info *gs_info,
> tree *dataref_ptr, tree *vec_offset)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple_seq stmts = NULL;
> *dataref_ptr = force_gimple_operand (gs_info->base, &stmts, true, NULL_TREE);
> if (stmts != NULL)
> @@ -2985,7 +2987,7 @@ vect_get_gather_scatter_ops (class loop
> gcc_assert (!new_bb);
> }
> tree offset_type = TREE_TYPE (gs_info->offset);
> - tree offset_vectype = get_vectype_for_scalar_type (offset_type);
> + tree offset_vectype = get_vectype_for_scalar_type (vinfo, offset_type);
> *vec_offset = vect_get_vec_def_for_operand (gs_info->offset, stmt_info,
> offset_vectype);
> }
> @@ -3020,7 +3022,7 @@ vect_get_strided_load_store_ops (stmt_ve
> /* The offset given in GS_INFO can have pointer type, so use the element
> type of the vector instead. */
> tree offset_type = TREE_TYPE (gs_info->offset);
> - tree offset_vectype = get_vectype_for_scalar_type (offset_type);
> + tree offset_vectype = get_vectype_for_scalar_type (loop_vinfo, offset_type);
> offset_type = TREE_TYPE (offset_vectype);
>
> /* Calculate X = DR_STEP / SCALE and convert it to the appropriate type. */
> @@ -4101,9 +4103,8 @@ vectorizable_simd_clone_call (stmt_vec_i
> || arginfo[i].dt == vect_external_def)
> && bestn->simdclone->args[i].arg_type == SIMD_CLONE_ARG_TYPE_VECTOR)
> {
> - arginfo[i].vectype
> - = get_vectype_for_scalar_type (TREE_TYPE (gimple_call_arg (stmt,
> - i)));
> + tree arg_type = TREE_TYPE (gimple_call_arg (stmt, i));
> + arginfo[i].vectype = get_vectype_for_scalar_type (vinfo, arg_type);
> if (arginfo[i].vectype == NULL
> || (simd_clone_subparts (arginfo[i].vectype)
> > bestn->simdclone->simdlen))
> @@ -5466,7 +5467,7 @@ vectorizable_assignment (stmt_vec_info s
> either as shift by a scalar or by a vector. */
>
> bool
> -vect_supportable_shift (vec_info *, enum tree_code code, tree scalar_type)
> +vect_supportable_shift (vec_info *vinfo, enum tree_code code, tree scalar_type)
> {
>
> machine_mode vec_mode;
> @@ -5474,7 +5475,7 @@ vect_supportable_shift (vec_info *, enum
> int icode;
> tree vectype;
>
> - vectype = get_vectype_for_scalar_type (scalar_type);
> + vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> if (!vectype)
> return false;
>
> @@ -9763,7 +9764,7 @@ vect_is_simple_cond (tree cond, vec_info
> scalar_type = build_nonstandard_integer_type
> (tree_to_uhwi (TYPE_SIZE (TREE_TYPE (vectype))),
> TYPE_UNSIGNED (scalar_type));
> - *comp_vectype = get_vectype_for_scalar_type (scalar_type);
> + *comp_vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> }
>
> return true;
> @@ -10359,7 +10360,7 @@ vectorizable_comparison (stmt_vec_info s
> /* Invariant comparison. */
> if (!vectype)
> {
> - vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
> + vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
> if (maybe_ne (TYPE_VECTOR_SUBPARTS (vectype), nunits))
> return false;
> }
> @@ -11140,7 +11141,7 @@ get_vectype_for_scalar_type_and_size (tr
> by the target. */
>
> tree
> -get_vectype_for_scalar_type (tree scalar_type)
> +get_vectype_for_scalar_type (vec_info *, tree scalar_type)
> {
> tree vectype;
> vectype = get_vectype_for_scalar_type_and_size (scalar_type,
> @@ -11157,9 +11158,9 @@ get_vectype_for_scalar_type (tree scalar
> of vectors of specified SCALAR_TYPE as supported by target. */
>
> tree
> -get_mask_type_for_scalar_type (vec_info *, tree scalar_type)
> +get_mask_type_for_scalar_type (vec_info *vinfo, tree scalar_type)
> {
> - tree vectype = get_vectype_for_scalar_type (scalar_type);
> + tree vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
>
> if (!vectype)
> return NULL;
> @@ -11853,6 +11854,7 @@ vect_get_vector_types_for_stmt (stmt_vec
> tree *stmt_vectype_out,
> tree *nunits_vectype_out)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple *stmt = stmt_info->stmt;
>
> *stmt_vectype_out = NULL_TREE;
> @@ -11919,7 +11921,7 @@ vect_get_vector_types_for_stmt (stmt_vec
> if (dump_enabled_p ())
> dump_printf_loc (MSG_NOTE, vect_location,
> "get vectype for scalar type: %T\n", scalar_type);
> - vectype = get_vectype_for_scalar_type (scalar_type);
> + vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> if (!vectype)
> return opt_result::failure_at (stmt,
> "not vectorized:"
> @@ -11952,7 +11954,7 @@ vect_get_vector_types_for_stmt (stmt_vec
> if (dump_enabled_p ())
> dump_printf_loc (MSG_NOTE, vect_location,
> "get vectype for scalar type: %T\n", scalar_type);
> - nunits_vectype = get_vectype_for_scalar_type (scalar_type);
> + nunits_vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> }
> if (!nunits_vectype)
> return opt_result::failure_at (stmt,
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c 2019-10-11 15:43:54.543490491 +0100
> +++ gcc/tree-vect-data-refs.c 2019-10-20 14:14:09.664722204 +0100
> @@ -4344,7 +4344,7 @@ vect_analyze_data_refs (vec_info *vinfo,
> /* Set vectype for STMT. */
> scalar_type = TREE_TYPE (DR_REF (dr));
> STMT_VINFO_VECTYPE (stmt_info)
> - = get_vectype_for_scalar_type (scalar_type);
> + = get_vectype_for_scalar_type (vinfo, scalar_type);
> if (!STMT_VINFO_VECTYPE (stmt_info))
> {
> if (dump_enabled_p ())
> @@ -4392,7 +4392,8 @@ vect_analyze_data_refs (vec_info *vinfo,
> if (!vect_check_gather_scatter (stmt_info,
> as_a <loop_vec_info> (vinfo),
> &gs_info)
> - || !get_vectype_for_scalar_type (TREE_TYPE (gs_info.offset)))
> + || !get_vectype_for_scalar_type (vinfo,
> + TREE_TYPE (gs_info.offset)))
> {
> if (fatal)
> *fatal = false;
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c 2019-10-20 13:58:02.095634389 +0100
> +++ gcc/tree-vect-loop.c 2019-10-20 14:14:09.668722173 +0100
> @@ -327,7 +327,7 @@ vect_determine_vectorization_factor (loo
> "get vectype for scalar type: %T\n",
> scalar_type);
>
> - vectype = get_vectype_for_scalar_type (scalar_type);
> + vectype = get_vectype_for_scalar_type (loop_vinfo, scalar_type);
> if (!vectype)
> return opt_result::failure_at (phi,
> "not vectorized: unsupported "
> @@ -3774,7 +3774,7 @@ get_initial_def_for_reduction (stmt_vec_
> loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
> class loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
> tree scalar_type = TREE_TYPE (init_val);
> - tree vectype = get_vectype_for_scalar_type (scalar_type);
> + tree vectype = get_vectype_for_scalar_type (loop_vinfo, scalar_type);
> tree def_for_init;
> tree init_def;
> REAL_VALUE_TYPE real_init_val = dconst0;
> @@ -5555,11 +5555,11 @@ build_vect_cond_expr (enum tree_code cod
> corresponds to the type of arguments to the reduction stmt, and should *NOT*
> be used to create the vectorized stmt. The right vectype for the vectorized
> stmt is obtained from the type of the result X:
> - get_vectype_for_scalar_type (TREE_TYPE (X))
> + get_vectype_for_scalar_type (vinfo, TREE_TYPE (X))
>
> This means that, contrary to "regular" reductions (or "regular" stmts in
> general), the following equation:
> - STMT_VINFO_VECTYPE == get_vectype_for_scalar_type (TREE_TYPE (X))
> + STMT_VINFO_VECTYPE == get_vectype_for_scalar_type (vinfo, TREE_TYPE (X))
> does *NOT* necessarily hold for reduction patterns. */
>
> bool
> Index: gcc/tree-vect-patterns.c
> ===================================================================
> --- gcc/tree-vect-patterns.c 2019-10-20 14:14:06.468745032 +0100
> +++ gcc/tree-vect-patterns.c 2019-10-20 14:14:09.668722173 +0100
> @@ -187,15 +187,15 @@ vect_get_external_def_edge (vec_info *vi
> is nonnull. */
>
> static bool
> -vect_supportable_direct_optab_p (vec_info *, tree otype, tree_code code,
> +vect_supportable_direct_optab_p (vec_info *vinfo, tree otype, tree_code code,
> tree itype, tree *vecotype_out,
> tree *vecitype_out = NULL)
> {
> - tree vecitype = get_vectype_for_scalar_type (itype);
> + tree vecitype = get_vectype_for_scalar_type (vinfo, itype);
> if (!vecitype)
> return false;
>
> - tree vecotype = get_vectype_for_scalar_type (otype);
> + tree vecotype = get_vectype_for_scalar_type (vinfo, otype);
> if (!vecotype)
> return false;
>
> @@ -635,6 +635,7 @@ vect_recog_temp_ssa_var (tree type, gimp
> vect_split_statement (stmt_vec_info stmt2_info, tree new_rhs,
> gimple *stmt1, tree vectype)
> {
> + vec_info *vinfo = stmt2_info->vinfo;
> if (is_pattern_stmt_p (stmt2_info))
> {
> /* STMT2_INFO is part of a pattern. Get the statement to which
> @@ -678,7 +679,7 @@ vect_split_statement (stmt_vec_info stmt
> two-statement pattern now. */
> gcc_assert (!STMT_VINFO_RELATED_STMT (stmt2_info));
> tree lhs_type = TREE_TYPE (gimple_get_lhs (stmt2_info->stmt));
> - tree lhs_vectype = get_vectype_for_scalar_type (lhs_type);
> + tree lhs_vectype = get_vectype_for_scalar_type (vinfo, lhs_type);
> if (!lhs_vectype)
> return false;
>
> @@ -715,6 +716,8 @@ vect_split_statement (stmt_vec_info stmt
> vect_convert_input (stmt_vec_info stmt_info, tree type,
> vect_unpromoted_value *unprom, tree vectype)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> +
> /* Check for a no-op conversion. */
> if (types_compatible_p (type, TREE_TYPE (unprom->op)))
> return unprom->op;
> @@ -752,7 +755,7 @@ vect_convert_input (stmt_vec_info stmt_i
> unsigned promotion. */
> tree midtype = build_nonstandard_integer_type
> (TYPE_PRECISION (type), TYPE_UNSIGNED (unprom->type));
> - tree vec_midtype = get_vectype_for_scalar_type (midtype);
> + tree vec_midtype = get_vectype_for_scalar_type (vinfo, midtype);
> if (vec_midtype)
> {
> input = vect_recog_temp_ssa_var (midtype, NULL);
> @@ -1189,6 +1192,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
> tree_code orig_code, tree_code wide_code,
> bool shift_p, const char *name)
> {
> + vec_info *vinfo = last_stmt_info->vinfo;
> gimple *last_stmt = last_stmt_info->stmt;
>
> vect_unpromoted_value unprom[2];
> @@ -1208,8 +1212,8 @@ vect_recog_widen_op_pattern (stmt_vec_in
> TYPE_UNSIGNED (half_type));
>
> /* Check target support */
> - tree vectype = get_vectype_for_scalar_type (half_type);
> - tree vecitype = get_vectype_for_scalar_type (itype);
> + tree vectype = get_vectype_for_scalar_type (vinfo, half_type);
> + tree vecitype = get_vectype_for_scalar_type (vinfo, itype);
> enum tree_code dummy_code;
> int dummy_int;
> auto_vec<tree> dummy_vec;
> @@ -1221,7 +1225,7 @@ vect_recog_widen_op_pattern (stmt_vec_in
> &dummy_int, &dummy_vec))
> return NULL;
>
> - *type_out = get_vectype_for_scalar_type (type);
> + *type_out = get_vectype_for_scalar_type (vinfo, type);
> if (!*type_out)
> return NULL;
>
> @@ -1342,7 +1346,7 @@ vect_recog_pow_pattern (stmt_vec_info st
> if (node->simd_clones == NULL)
> return NULL;
> }
> - *type_out = get_vectype_for_scalar_type (TREE_TYPE (base));
> + *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base));
> if (!*type_out)
> return NULL;
> tree def = vect_recog_temp_ssa_var (TREE_TYPE (base), NULL);
> @@ -1380,7 +1384,7 @@ vect_recog_pow_pattern (stmt_vec_info st
> if (TREE_CODE (exp) == REAL_CST
> && real_equal (&TREE_REAL_CST (exp), &dconsthalf))
> {
> - *type_out = get_vectype_for_scalar_type (TREE_TYPE (base));
> + *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base));
> if (*type_out
> && direct_internal_fn_supported_p (IFN_SQRT, *type_out,
> OPTIMIZE_FOR_SPEED))
> @@ -1665,7 +1669,7 @@ vect_recog_over_widening_pattern (stmt_v
>
> vect_pattern_detected ("vect_recog_over_widening_pattern", last_stmt);
>
> - *type_out = get_vectype_for_scalar_type (type);
> + *type_out = get_vectype_for_scalar_type (vinfo, type);
> if (!*type_out)
> return NULL;
>
> @@ -1686,8 +1690,8 @@ vect_recog_over_widening_pattern (stmt_v
> wants to rewrite anyway. If targets have a minimum element size
> for some optabs, we should pattern-match smaller ops to larger ops
> where beneficial. */
> - tree new_vectype = get_vectype_for_scalar_type (new_type);
> - tree op_vectype = get_vectype_for_scalar_type (op_type);
> + tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
> + tree op_vectype = get_vectype_for_scalar_type (vinfo, op_type);
> if (!new_vectype || !op_vectype)
> return NULL;
>
> @@ -1864,7 +1868,7 @@ vect_recog_mulhs_pattern (stmt_vec_info
> (target_precision, TYPE_UNSIGNED (new_type));
>
> /* Check for target support. */
> - tree new_vectype = get_vectype_for_scalar_type (new_type);
> + tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
> if (!new_vectype
> || !direct_internal_fn_supported_p
> (ifn, new_vectype, OPTIMIZE_FOR_SPEED))
> @@ -1872,7 +1876,7 @@ vect_recog_mulhs_pattern (stmt_vec_info
>
> /* The IR requires a valid vector type for the cast result, even though
> it's likely to be discarded. */
> - *type_out = get_vectype_for_scalar_type (lhs_type);
> + *type_out = get_vectype_for_scalar_type (vinfo, lhs_type);
> if (!*type_out)
> return NULL;
>
> @@ -2014,7 +2018,7 @@ vect_recog_average_pattern (stmt_vec_inf
> TYPE_UNSIGNED (new_type));
>
> /* Check for target support. */
> - tree new_vectype = get_vectype_for_scalar_type (new_type);
> + tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type);
> if (!new_vectype
> || !direct_internal_fn_supported_p (ifn, new_vectype,
> OPTIMIZE_FOR_SPEED))
> @@ -2022,7 +2026,7 @@ vect_recog_average_pattern (stmt_vec_inf
>
> /* The IR requires a valid vector type for the cast result, even though
> it's likely to be discarded. */
> - *type_out = get_vectype_for_scalar_type (type);
> + *type_out = get_vectype_for_scalar_type (vinfo, type);
> if (!*type_out)
> return NULL;
>
> @@ -2108,7 +2112,7 @@ vect_recog_cast_forwprop_pattern (stmt_v
> the unnecessary widening and narrowing. */
> vect_pattern_detected ("vect_recog_cast_forwprop_pattern", last_stmt);
>
> - *type_out = get_vectype_for_scalar_type (lhs_type);
> + *type_out = get_vectype_for_scalar_type (vinfo, lhs_type);
> if (!*type_out)
> return NULL;
>
> @@ -2219,7 +2223,7 @@ vect_recog_rotate_pattern (stmt_vec_info
> }
>
> type = TREE_TYPE (lhs);
> - vectype = get_vectype_for_scalar_type (type);
> + vectype = get_vectype_for_scalar_type (vinfo, type);
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -2285,7 +2289,7 @@ vect_recog_rotate_pattern (stmt_vec_info
> && dt != vect_external_def)
> return NULL;
>
> - vectype = get_vectype_for_scalar_type (type);
> + vectype = get_vectype_for_scalar_type (vinfo, type);
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -2404,7 +2408,7 @@ vect_recog_rotate_pattern (stmt_vec_info
> }
> else
> {
> - tree vecstype = get_vectype_for_scalar_type (stype);
> + tree vecstype = get_vectype_for_scalar_type (vinfo, stype);
>
> if (vecstype == NULL_TREE)
> return NULL;
> @@ -2533,7 +2537,7 @@ vect_recog_vector_vector_shift_pattern (
> if (!def_vinfo)
> return NULL;
>
> - *type_out = get_vectype_for_scalar_type (TREE_TYPE (oprnd0));
> + *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (oprnd0));
> if (*type_out == NULL_TREE)
> return NULL;
>
> @@ -2556,7 +2560,8 @@ vect_recog_vector_vector_shift_pattern (
> TYPE_PRECISION (TREE_TYPE (oprnd1)));
> def = vect_recog_temp_ssa_var (TREE_TYPE (rhs1), NULL);
> def_stmt = gimple_build_assign (def, BIT_AND_EXPR, rhs1, mask);
> - tree vecstype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
> + tree vecstype = get_vectype_for_scalar_type (vinfo,
> + TREE_TYPE (rhs1));
> append_pattern_def_seq (stmt_vinfo, def_stmt, vecstype);
> }
> }
> @@ -2751,7 +2756,7 @@ vect_synth_mult_by_constant (tree op, tr
> if (!possible)
> return NULL;
>
> - tree vectype = get_vectype_for_scalar_type (multtype);
> + tree vectype = get_vectype_for_scalar_type (vinfo, multtype);
>
> if (!vectype
> || !target_supports_mult_synth_alg (&alg, variant,
> @@ -2897,6 +2902,7 @@ vect_synth_mult_by_constant (tree op, tr
> static gimple *
> vect_recog_mult_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
> {
> + vec_info *vinfo = stmt_vinfo->vinfo;
> gimple *last_stmt = stmt_vinfo->stmt;
> tree oprnd0, oprnd1, vectype, itype;
> gimple *pattern_stmt;
> @@ -2917,7 +2923,7 @@ vect_recog_mult_pattern (stmt_vec_info s
> || !type_has_mode_precision_p (itype))
> return NULL;
>
> - vectype = get_vectype_for_scalar_type (itype);
> + vectype = get_vectype_for_scalar_type (vinfo, itype);
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -2985,6 +2991,7 @@ vect_recog_mult_pattern (stmt_vec_info s
> static gimple *
> vect_recog_divmod_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
> {
> + vec_info *vinfo = stmt_vinfo->vinfo;
> gimple *last_stmt = stmt_vinfo->stmt;
> tree oprnd0, oprnd1, vectype, itype, cond;
> gimple *pattern_stmt, *def_stmt;
> @@ -3017,7 +3024,7 @@ vect_recog_divmod_pattern (stmt_vec_info
> return NULL;
>
> scalar_int_mode itype_mode = SCALAR_INT_TYPE_MODE (itype);
> - vectype = get_vectype_for_scalar_type (itype);
> + vectype = get_vectype_for_scalar_type (vinfo, itype);
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -3115,7 +3122,7 @@ vect_recog_divmod_pattern (stmt_vec_info
> {
> tree utype
> = build_nonstandard_integer_type (prec, 1);
> - tree vecutype = get_vectype_for_scalar_type (utype);
> + tree vecutype = get_vectype_for_scalar_type (vinfo, utype);
> tree shift
> = build_int_cst (utype, GET_MODE_BITSIZE (itype_mode)
> - tree_log2 (oprnd1));
> @@ -3433,6 +3440,7 @@ vect_recog_divmod_pattern (stmt_vec_info
> static gimple *
> vect_recog_mixed_size_cond_pattern (stmt_vec_info stmt_vinfo, tree *type_out)
> {
> + vec_info *vinfo = stmt_vinfo->vinfo;
> gimple *last_stmt = stmt_vinfo->stmt;
> tree cond_expr, then_clause, else_clause;
> tree type, vectype, comp_vectype, itype = NULL_TREE, vecitype;
> @@ -3455,7 +3463,7 @@ vect_recog_mixed_size_cond_pattern (stmt
> return NULL;
>
> comp_scalar_type = TREE_TYPE (TREE_OPERAND (cond_expr, 0));
> - comp_vectype = get_vectype_for_scalar_type (comp_scalar_type);
> + comp_vectype = get_vectype_for_scalar_type (vinfo, comp_scalar_type);
> if (comp_vectype == NULL_TREE)
> return NULL;
>
> @@ -3503,7 +3511,7 @@ vect_recog_mixed_size_cond_pattern (stmt
> if (GET_MODE_BITSIZE (type_mode) == cmp_mode_size)
> return NULL;
>
> - vectype = get_vectype_for_scalar_type (type);
> + vectype = get_vectype_for_scalar_type (vinfo, type);
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -3518,7 +3526,7 @@ vect_recog_mixed_size_cond_pattern (stmt
> || GET_MODE_BITSIZE (SCALAR_TYPE_MODE (itype)) != cmp_mode_size)
> return NULL;
>
> - vecitype = get_vectype_for_scalar_type (itype);
> + vecitype = get_vectype_for_scalar_type (vinfo, itype);
> if (vecitype == NULL_TREE)
> return NULL;
>
> @@ -3612,7 +3620,7 @@ check_bool_pattern (tree var, vec_info *
> if (stmt_could_throw_p (cfun, def_stmt))
> return false;
>
> - comp_vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
> + comp_vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
> if (comp_vectype == NULL_TREE)
> return false;
>
> @@ -3627,7 +3635,7 @@ check_bool_pattern (tree var, vec_info *
> scalar_mode mode = SCALAR_TYPE_MODE (TREE_TYPE (rhs1));
> tree itype
> = build_nonstandard_integer_type (GET_MODE_BITSIZE (mode), 1);
> - vecitype = get_vectype_for_scalar_type (itype);
> + vecitype = get_vectype_for_scalar_type (vinfo, itype);
> if (vecitype == NULL_TREE)
> return false;
> }
> @@ -3656,10 +3664,11 @@ check_bool_pattern (tree var, vec_info *
> static tree
> adjust_bool_pattern_cast (tree type, tree var, stmt_vec_info stmt_info)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple *cast_stmt = gimple_build_assign (vect_recog_temp_ssa_var (type, NULL),
> NOP_EXPR, var);
> append_pattern_def_seq (stmt_info, cast_stmt,
> - get_vectype_for_scalar_type (type));
> + get_vectype_for_scalar_type (vinfo, type));
> return gimple_assign_lhs (cast_stmt);
> }
>
> @@ -3673,6 +3682,7 @@ adjust_bool_pattern_cast (tree type, tre
> adjust_bool_pattern (tree var, tree out_type,
> stmt_vec_info stmt_info, hash_map <tree, tree> &defs)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> gimple *stmt = SSA_NAME_DEF_STMT (var);
> enum tree_code rhs_code, def_rhs_code;
> tree itype, cond_expr, rhs1, rhs2, irhs1, irhs2;
> @@ -3834,7 +3844,7 @@ adjust_bool_pattern (tree var, tree out_
>
> gimple_set_location (pattern_stmt, loc);
> append_pattern_def_seq (stmt_info, pattern_stmt,
> - get_vectype_for_scalar_type (itype));
> + get_vectype_for_scalar_type (vinfo, itype));
> defs.put (var, gimple_assign_lhs (pattern_stmt));
> }
>
> @@ -3937,7 +3947,7 @@ search_type_for_mask_1 (tree var, vec_in
> break;
> }
>
> - comp_vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
> + comp_vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1));
> if (comp_vectype == NULL_TREE)
> {
> res = NULL_TREE;
> @@ -4052,7 +4062,7 @@ vect_recog_bool_pattern (stmt_vec_info s
> if (! INTEGRAL_TYPE_P (TREE_TYPE (lhs))
> || TYPE_PRECISION (TREE_TYPE (lhs)) == 1)
> return NULL;
> - vectype = get_vectype_for_scalar_type (TREE_TYPE (lhs));
> + vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -4089,7 +4099,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>
> if (!useless_type_conversion_p (type, TREE_TYPE (lhs)))
> {
> - tree new_vectype = get_vectype_for_scalar_type (type);
> + tree new_vectype = get_vectype_for_scalar_type (vinfo, type);
> append_pattern_def_seq (stmt_vinfo, pattern_stmt, new_vectype);
>
> lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL);
> @@ -4105,7 +4115,7 @@ vect_recog_bool_pattern (stmt_vec_info s
> else if (rhs_code == COND_EXPR
> && TREE_CODE (var) == SSA_NAME)
> {
> - vectype = get_vectype_for_scalar_type (TREE_TYPE (lhs));
> + vectype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
> if (vectype == NULL_TREE)
> return NULL;
>
> @@ -4119,7 +4129,7 @@ vect_recog_bool_pattern (stmt_vec_info s
> tree type
> = build_nonstandard_integer_type (prec,
> TYPE_UNSIGNED (TREE_TYPE (var)));
> - if (get_vectype_for_scalar_type (type) == NULL_TREE)
> + if (get_vectype_for_scalar_type (vinfo, type) == NULL_TREE)
> return NULL;
>
> if (!check_bool_pattern (var, vinfo, bool_stmts))
> @@ -4163,7 +4173,7 @@ vect_recog_bool_pattern (stmt_vec_info s
>
> cst0 = build_int_cst (type, 0);
> cst1 = build_int_cst (type, 1);
> - new_vectype = get_vectype_for_scalar_type (type);
> + new_vectype = get_vectype_for_scalar_type (vinfo, type);
>
> rhs = vect_recog_temp_ssa_var (type, NULL);
> pattern_stmt = gimple_build_assign (rhs, COND_EXPR, var, cst1, cst0);
> @@ -4264,12 +4274,12 @@ vect_recog_mask_conversion_pattern (stmt
> {
> int rhs_index = internal_fn_stored_value_index (ifn);
> tree rhs = gimple_call_arg (last_stmt, rhs_index);
> - vectype1 = get_vectype_for_scalar_type (TREE_TYPE (rhs));
> + vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs));
> }
> else
> {
> lhs = gimple_call_lhs (last_stmt);
> - vectype1 = get_vectype_for_scalar_type (TREE_TYPE (lhs));
> + vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
> }
>
> tree mask_arg = gimple_call_arg (last_stmt, mask_argno);
> @@ -4322,7 +4332,7 @@ vect_recog_mask_conversion_pattern (stmt
> /* Check for cond expression requiring mask conversion. */
> if (rhs_code == COND_EXPR)
> {
> - vectype1 = get_vectype_for_scalar_type (TREE_TYPE (lhs));
> + vectype1 = get_vectype_for_scalar_type (vinfo, TREE_TYPE (lhs));
>
> if (TREE_CODE (rhs1) == SSA_NAME)
> {
> @@ -4388,7 +4398,8 @@ vect_recog_mask_conversion_pattern (stmt
> tree wide_scalar_type = build_nonstandard_integer_type
> (tree_to_uhwi (TYPE_SIZE (TREE_TYPE (vectype1))),
> TYPE_UNSIGNED (rhs1_type));
> - tree vectype3 = get_vectype_for_scalar_type (wide_scalar_type);
> + tree vectype3 = get_vectype_for_scalar_type (vinfo,
> + wide_scalar_type);
> if (expand_vec_cond_expr_p (vectype1, vectype3, TREE_CODE (rhs1)))
> return NULL;
> }
> @@ -4544,10 +4555,11 @@ vect_add_conversion_to_pattern (tree typ
> if (useless_type_conversion_p (type, TREE_TYPE (value)))
> return value;
>
> + vec_info *vinfo = stmt_info->vinfo;
> tree new_value = vect_recog_temp_ssa_var (type, NULL);
> gassign *conversion = gimple_build_assign (new_value, CONVERT_EXPR, value);
> append_pattern_def_seq (stmt_info, conversion,
> - get_vectype_for_scalar_type (type));
> + get_vectype_for_scalar_type (vinfo, type));
> return new_value;
> }
>
> @@ -4583,7 +4595,8 @@ vect_recog_gather_scatter_pattern (stmt_
> return NULL;
>
> /* Convert the mask to the right form. */
> - tree gs_vectype = get_vectype_for_scalar_type (gs_info.element_type);
> + tree gs_vectype = get_vectype_for_scalar_type (loop_vinfo,
> + gs_info.element_type);
> if (mask)
> mask = vect_convert_mask_for_vectype (mask, gs_vectype, stmt_info,
> loop_vinfo);
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2019-10-20 13:59:25.923035567 +0100
> +++ gcc/tree-vect-slp.c 2019-10-20 14:14:09.668722173 +0100
> @@ -1127,7 +1127,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
> if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
> {
> tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
> - tree vectype = get_vectype_for_scalar_type (scalar_type);
> + tree vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> if (!vect_record_max_nunits (stmt_info, group_size, vectype, max_nunits))
> return NULL;
>
> @@ -1926,7 +1926,7 @@ vect_analyze_slp_instance (vec_info *vin
> if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> {
> scalar_type = TREE_TYPE (DR_REF (dr));
> - vectype = get_vectype_for_scalar_type (scalar_type);
> + vectype = get_vectype_for_scalar_type (vinfo, scalar_type);
> group_size = DR_GROUP_SIZE (stmt_info);
> }
> else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
> @@ -3287,6 +3287,7 @@ vect_get_constant_vectors (tree op, slp_
> {
> vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> stmt_vec_info stmt_vinfo = stmts[0];
> + vec_info *vinfo = stmt_vinfo->vinfo;
> gimple *stmt = stmt_vinfo->stmt;
> unsigned HOST_WIDE_INT nunits;
> tree vec_cst;
> @@ -3310,7 +3311,7 @@ vect_get_constant_vectors (tree op, slp_
> vector_type
> = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
> else
> - vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
> + vector_type = get_vectype_for_scalar_type (vinfo, TREE_TYPE (op));
>
> if (STMT_VINFO_DATA_REF (stmt_vinfo))
> {
>
>
> Pass a vec_info to duplicate_and_interleave
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (duplicate_and_interleave): Take a vec_info.
> * tree-vect-slp.c (duplicate_and_interleave): Likewise.
> (vect_get_constant_vectors): Update call accordingly.
> * tree-vect-loop.c (get_initial_defs_for_reduction): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:09.672722145 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:13.256696547 +0100
> @@ -1754,8 +1754,8 @@ extern bool is_simple_and_all_uses_invar
> extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
> unsigned int * = NULL,
> tree * = NULL, tree * = NULL);
> -extern void duplicate_and_interleave (gimple_seq *, tree, vec<tree>,
> - unsigned int, vec<tree> &);
> +extern void duplicate_and_interleave (vec_info *, gimple_seq *, tree,
> + vec<tree>, unsigned int, vec<tree> &);
> extern int vect_get_place_in_interleaving_chain (stmt_vec_info, stmt_vec_info);
>
> /* In tree-vect-patterns.c. */
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2019-10-20 14:14:09.668722173 +0100
> +++ gcc/tree-vect-slp.c 2019-10-20 14:14:13.256696547 +0100
> @@ -3183,8 +3183,9 @@ vect_mask_constant_operand_p (stmt_vec_i
> to cut down on the number of interleaves. */
>
> void
> -duplicate_and_interleave (gimple_seq *seq, tree vector_type, vec<tree> elts,
> - unsigned int nresults, vec<tree> &results)
> +duplicate_and_interleave (vec_info *, gimple_seq *seq, tree vector_type,
> + vec<tree> elts, unsigned int nresults,
> + vec<tree> &results)
> {
> unsigned int nelts = elts.length ();
> tree element_type = TREE_TYPE (vector_type);
> @@ -3473,8 +3474,8 @@ vect_get_constant_vectors (tree op, slp_
> else
> {
> if (vec_oprnds->is_empty ())
> - duplicate_and_interleave (&ctor_seq, vector_type, elts,
> - number_of_vectors,
> + duplicate_and_interleave (vinfo, &ctor_seq, vector_type,
> + elts, number_of_vectors,
> permute_results);
> vec_cst = permute_results[number_of_vectors - j - 1];
> }
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c 2019-10-20 14:14:09.668722173 +0100
> +++ gcc/tree-vect-loop.c 2019-10-20 14:14:13.252696575 +0100
> @@ -3878,6 +3878,7 @@ get_initial_defs_for_reduction (slp_tree
> {
> vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> stmt_vec_info stmt_vinfo = stmts[0];
> + vec_info *vinfo = stmt_vinfo->vinfo;
> unsigned HOST_WIDE_INT nunits;
> unsigned j, number_of_places_left_in_vector;
> tree vector_type;
> @@ -3970,7 +3971,7 @@ get_initial_defs_for_reduction (slp_tree
> {
> /* First time round, duplicate ELTS to fill the
> required number of vectors. */
> - duplicate_and_interleave (&ctor_seq, vector_type, elts,
> + duplicate_and_interleave (vinfo, &ctor_seq, vector_type, elts,
> number_of_vectors, *vec_oprnds);
> break;
> }
>
>
> Pass a vec_info to can_duplicate_and_interleave_p
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (can_duplicate_and_interleave_p): Take a vec_info.
> * tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
> (duplicate_and_interleave): Update call accordingly.
> * tree-vect-loop.c (vectorizable_reduction): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:13.256696547 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:16.688672033 +0100
> @@ -1751,7 +1751,8 @@ extern void vect_get_slp_defs (vec<tree>
> extern bool vect_slp_bb (basic_block);
> extern stmt_vec_info vect_find_last_scalar_stmt_in_slp (slp_tree);
> extern bool is_simple_and_all_uses_invariant (stmt_vec_info, loop_vec_info);
> -extern bool can_duplicate_and_interleave_p (unsigned int, machine_mode,
> +extern bool can_duplicate_and_interleave_p (vec_info *, unsigned int,
> + machine_mode,
> unsigned int * = NULL,
> tree * = NULL, tree * = NULL);
> extern void duplicate_and_interleave (vec_info *, gimple_seq *, tree,
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2019-10-20 14:14:13.256696547 +0100
> +++ gcc/tree-vect-slp.c 2019-10-20 14:14:16.688672033 +0100
> @@ -233,7 +233,8 @@ vect_get_place_in_interleaving_chain (st
> (if nonnull). */
>
> bool
> -can_duplicate_and_interleave_p (unsigned int count, machine_mode elt_mode,
> +can_duplicate_and_interleave_p (vec_info *, unsigned int count,
> + machine_mode elt_mode,
> unsigned int *nvectors_out,
> tree *vector_type_out,
> tree *permutes)
> @@ -432,7 +433,7 @@ vect_get_and_check_slp_defs (vec_info *v
> || dt == vect_external_def)
> && !current_vector_size.is_constant ()
> && (TREE_CODE (type) == BOOLEAN_TYPE
> - || !can_duplicate_and_interleave_p (stmts.length (),
> + || !can_duplicate_and_interleave_p (vinfo, stmts.length (),
> TYPE_MODE (type))))
> {
> if (dump_enabled_p ())
> @@ -3183,7 +3184,7 @@ vect_mask_constant_operand_p (stmt_vec_i
> to cut down on the number of interleaves. */
>
> void
> -duplicate_and_interleave (vec_info *, gimple_seq *seq, tree vector_type,
> +duplicate_and_interleave (vec_info *vinfo, gimple_seq *seq, tree vector_type,
> vec<tree> elts, unsigned int nresults,
> vec<tree> &results)
> {
> @@ -3194,7 +3195,7 @@ duplicate_and_interleave (vec_info *, gi
> unsigned int nvectors = 1;
> tree new_vector_type;
> tree permutes[2];
> - if (!can_duplicate_and_interleave_p (nelts, TYPE_MODE (element_type),
> + if (!can_duplicate_and_interleave_p (vinfo, nelts, TYPE_MODE (element_type),
> &nvectors, &new_vector_type,
> permutes))
> gcc_unreachable ();
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c 2019-10-20 14:14:13.252696575 +0100
> +++ gcc/tree-vect-loop.c 2019-10-20 14:14:16.684672061 +0100
> @@ -6145,7 +6145,8 @@ vectorizable_reduction (stmt_vec_info st
> unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_node_instance);
> scalar_mode elt_mode = SCALAR_TYPE_MODE (TREE_TYPE (vectype_out));
> if (!neutral_op
> - && !can_duplicate_and_interleave_p (group_size, elt_mode))
> + && !can_duplicate_and_interleave_p (loop_vinfo, group_size,
> + elt_mode))
> {
> if (dump_enabled_p ())
> dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>
>
> Pass a vec_info to simple_integer_narrowing
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vect-stmts.c (simple_integer_narrowing): Take a vec_info.
> (vectorizable_call): Update call accordingly.
>
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:09.672722145 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:19.748650179 +0100
> @@ -3175,7 +3175,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
> *CONVERT_CODE. */
>
> static bool
> -simple_integer_narrowing (tree vectype_out, tree vectype_in,
> +simple_integer_narrowing (vec_info *, tree vectype_out, tree vectype_in,
> tree_code *convert_code)
> {
> if (!INTEGRAL_TYPE_P (TREE_TYPE (vectype_out))
> @@ -3369,7 +3369,7 @@ vectorizable_call (stmt_vec_info stmt_in
> if (cfn != CFN_LAST
> && (modifier == NONE
> || (modifier == NARROW
> - && simple_integer_narrowing (vectype_out, vectype_in,
> + && simple_integer_narrowing (vinfo, vectype_out, vectype_in,
> &convert_code))))
> ifn = vectorizable_internal_function (cfn, callee, vectype_out,
> vectype_in);
>
>
> Pass a vec_info to supportable_narrowing_operation
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (supportable_narrowing_operation): Take a vec_info.
> * tree-vect-stmts.c (supportable_narrowing_operation): Likewise.
> (simple_integer_narrowing): Update call accordingly.
> (vectorizable_conversion): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:16.688672033 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:23.176625692 +0100
> @@ -1603,8 +1603,8 @@ extern bool supportable_widening_operati
> tree, tree, enum tree_code *,
> enum tree_code *, int *,
> vec<tree> *);
> -extern bool supportable_narrowing_operation (enum tree_code, tree, tree,
> - enum tree_code *,
> +extern bool supportable_narrowing_operation (vec_info *, enum tree_code, tree,
> + tree, enum tree_code *,
> int *, vec<tree> *);
> extern unsigned record_stmt_cost (stmt_vector_for_cost *, int,
> enum vect_cost_for_stmt, stmt_vec_info,
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:19.748650179 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:23.176625692 +0100
> @@ -3175,7 +3175,7 @@ vectorizable_bswap (stmt_vec_info stmt_i
> *CONVERT_CODE. */
>
> static bool
> -simple_integer_narrowing (vec_info *, tree vectype_out, tree vectype_in,
> +simple_integer_narrowing (vec_info *vinfo, tree vectype_out, tree vectype_in,
> tree_code *convert_code)
> {
> if (!INTEGRAL_TYPE_P (TREE_TYPE (vectype_out))
> @@ -3185,8 +3185,8 @@ simple_integer_narrowing (vec_info *, tr
> tree_code code;
> int multi_step_cvt = 0;
> auto_vec <tree, 8> interm_types;
> - if (!supportable_narrowing_operation (NOP_EXPR, vectype_out, vectype_in,
> - &code, &multi_step_cvt,
> + if (!supportable_narrowing_operation (vinfo, NOP_EXPR, vectype_out,
> + vectype_in, &code, &multi_step_cvt,
> &interm_types)
> || multi_step_cvt)
> return false;
> @@ -4957,8 +4957,8 @@ vectorizable_conversion (stmt_vec_info s
>
> case NARROW:
> gcc_assert (op_type == unary_op);
> - if (supportable_narrowing_operation (code, vectype_out, vectype_in,
> - &code1, &multi_step_cvt,
> + if (supportable_narrowing_operation (vinfo, code, vectype_out,
> + vectype_in, &code1, &multi_step_cvt,
> &interm_types))
> break;
>
> @@ -4974,8 +4974,8 @@ vectorizable_conversion (stmt_vec_info s
> if (!supportable_convert_operation (code, cvt_type, vectype_in,
> &decl1, &codecvt1))
> goto unsupported;
> - if (supportable_narrowing_operation (NOP_EXPR, vectype_out, cvt_type,
> - &code1, &multi_step_cvt,
> + if (supportable_narrowing_operation (vinfo, NOP_EXPR, vectype_out,
> + cvt_type, &code1, &multi_step_cvt,
> &interm_types))
> break;
> goto unsupported;
> @@ -11649,7 +11649,7 @@ supportable_widening_operation (enum tre
> narrowing operation (short in the above example). */
>
> bool
> -supportable_narrowing_operation (enum tree_code code,
> +supportable_narrowing_operation (vec_info *, enum tree_code code,
> tree vectype_out, tree vectype_in,
> enum tree_code *code1, int *multi_step_cvt,
> vec<tree> *interm_types)
>
>
> Pass a loop_vec_info to vect_maybe_permute_loop_masks
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Take
> a loop_vec_info.
> (vect_set_loop_condition_masked): Update call accordingly.
>
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c 2019-10-17 14:22:54.919313309 +0100
> +++ gcc/tree-vect-loop-manip.c 2019-10-20 14:14:26.736600265 +0100
> @@ -317,7 +317,8 @@ interleave_supported_p (vec_perm_indices
> latter. Return true on success, adding any new statements to SEQ. */
>
> static bool
> -vect_maybe_permute_loop_masks (gimple_seq *seq, rgroup_masks *dest_rgm,
> +vect_maybe_permute_loop_masks (loop_vec_info, gimple_seq *seq,
> + rgroup_masks *dest_rgm,
> rgroup_masks *src_rgm)
> {
> tree src_masktype = src_rgm->mask_type;
> @@ -689,7 +690,8 @@ vect_set_loop_condition_masked (class lo
> {
> rgroup_masks *half_rgm = &(*masks)[nmasks / 2 - 1];
> if (!half_rgm->masks.is_empty ()
> - && vect_maybe_permute_loop_masks (&header_seq, rgm, half_rgm))
> + && vect_maybe_permute_loop_masks (loop_vinfo, &header_seq,
> + rgm, half_rgm))
> continue;
> }
>
>
> Pass a vec_info to vect_halve_mask_nunits
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (vect_halve_mask_nunits): Take a vec_info.
> * tree-vect-loop.c (vect_halve_mask_nunits): Likewise.
> * tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Update
> call accordingly.
> * tree-vect-stmts.c (supportable_widening_operation): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:23.176625692 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:30.500573381 +0100
> @@ -1705,7 +1705,7 @@ extern opt_loop_vec_info vect_analyze_lo
> extern tree vect_build_loop_niters (loop_vec_info, bool * = NULL);
> extern void vect_gen_vector_loop_niters (loop_vec_info, tree, tree *,
> tree *, bool);
> -extern tree vect_halve_mask_nunits (tree);
> +extern tree vect_halve_mask_nunits (vec_info *, tree);
> extern tree vect_double_mask_nunits (tree);
> extern void vect_record_loop_mask (loop_vec_info, vec_loop_masks *,
> unsigned int, tree, tree);
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c 2019-10-20 14:14:16.684672061 +0100
> +++ gcc/tree-vect-loop.c 2019-10-20 14:14:30.496573409 +0100
> @@ -7745,7 +7745,7 @@ loop_niters_no_overflow (loop_vec_info l
> /* Return a mask type with half the number of elements as TYPE. */
>
> tree
> -vect_halve_mask_nunits (tree type)
> +vect_halve_mask_nunits (vec_info *, tree type)
> {
> poly_uint64 nunits = exact_div (TYPE_VECTOR_SUBPARTS (type), 2);
> return build_truth_vector_type (nunits, current_vector_size);
> Index: gcc/tree-vect-loop-manip.c
> ===================================================================
> --- gcc/tree-vect-loop-manip.c 2019-10-20 14:14:26.736600265 +0100
> +++ gcc/tree-vect-loop-manip.c 2019-10-20 14:14:30.496573409 +0100
> @@ -317,7 +317,7 @@ interleave_supported_p (vec_perm_indices
> latter. Return true on success, adding any new statements to SEQ. */
>
> static bool
> -vect_maybe_permute_loop_masks (loop_vec_info, gimple_seq *seq,
> +vect_maybe_permute_loop_masks (loop_vec_info loop_vinfo, gimple_seq *seq,
> rgroup_masks *dest_rgm,
> rgroup_masks *src_rgm)
> {
> @@ -330,7 +330,7 @@ vect_maybe_permute_loop_masks (loop_vec_
> {
> /* Unpacking the source masks gives at least as many mask bits as
> we need. We can then VIEW_CONVERT any excess bits away. */
> - tree unpack_masktype = vect_halve_mask_nunits (src_masktype);
> + tree unpack_masktype = vect_halve_mask_nunits (loop_vinfo, src_masktype);
> for (unsigned int i = 0; i < dest_rgm->masks.length (); ++i)
> {
> tree src = src_rgm->masks[i / 2];
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:23.176625692 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:30.500573381 +0100
> @@ -11385,6 +11385,7 @@ supportable_widening_operation (enum tre
> int *multi_step_cvt,
> vec<tree> *interm_types)
> {
> + vec_info *vinfo = stmt_info->vinfo;
> loop_vec_info loop_info = STMT_VINFO_LOOP_VINFO (stmt_info);
> class loop *vect_loop = NULL;
> machine_mode vec_mode;
> @@ -11570,7 +11571,7 @@ supportable_widening_operation (enum tre
> intermediate_mode = insn_data[icode1].operand[0].mode;
> if (VECTOR_BOOLEAN_TYPE_P (prev_type))
> {
> - intermediate_type = vect_halve_mask_nunits (prev_type);
> + intermediate_type = vect_halve_mask_nunits (vinfo, prev_type);
> if (intermediate_mode != TYPE_MODE (intermediate_type))
> return false;
> }
>
>
> Pass a vec_info to vect_double_mask_nunits
>
> 2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree-vectorizer.h (vect_double_mask_nunits): Take a vec_info.
> * tree-vect-loop.c (vect_double_mask_nunits): Likewise.
> * tree-vect-stmts.c (supportable_narrowing_operation): Update call
> accordingly.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h 2019-10-20 14:14:30.500573381 +0100
> +++ gcc/tree-vectorizer.h 2019-10-20 14:14:33.692550581 +0100
> @@ -1706,7 +1706,7 @@ extern tree vect_build_loop_niters (loop
> extern void vect_gen_vector_loop_niters (loop_vec_info, tree, tree *,
> tree *, bool);
> extern tree vect_halve_mask_nunits (vec_info *, tree);
> -extern tree vect_double_mask_nunits (tree);
> +extern tree vect_double_mask_nunits (vec_info *, tree);
> extern void vect_record_loop_mask (loop_vec_info, vec_loop_masks *,
> unsigned int, tree, tree);
> extern tree vect_get_loop_mask (gimple_stmt_iterator *, vec_loop_masks *,
> Index: gcc/tree-vect-loop.c
> ===================================================================
> --- gcc/tree-vect-loop.c 2019-10-20 14:14:30.496573409 +0100
> +++ gcc/tree-vect-loop.c 2019-10-20 14:14:33.692550581 +0100
> @@ -7754,7 +7754,7 @@ vect_halve_mask_nunits (vec_info *, tree
> /* Return a mask type with twice as many elements as TYPE. */
>
> tree
> -vect_double_mask_nunits (tree type)
> +vect_double_mask_nunits (vec_info *, tree type)
> {
> poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type) * 2;
> return build_truth_vector_type (nunits, current_vector_size);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c 2019-10-20 14:14:30.500573381 +0100
> +++ gcc/tree-vect-stmts.c 2019-10-20 14:14:33.692550581 +0100
> @@ -11650,7 +11650,7 @@ supportable_widening_operation (enum tre
> narrowing operation (short in the above example). */
>
> bool
> -supportable_narrowing_operation (vec_info *, enum tree_code code,
> +supportable_narrowing_operation (vec_info *vinfo, enum tree_code code,
> tree vectype_out, tree vectype_in,
> enum tree_code *code1, int *multi_step_cvt,
> vec<tree> *interm_types)
> @@ -11759,7 +11759,7 @@ supportable_narrowing_operation (vec_inf
> intermediate_mode = insn_data[icode1].operand[0].mode;
> if (VECTOR_BOOLEAN_TYPE_P (prev_type))
> {
> - intermediate_type = vect_double_mask_nunits (prev_type);
> + intermediate_type = vect_double_mask_nunits (vinfo, prev_type);
> if (intermediate_mode != TYPE_MODE (intermediate_type))
> return false;
> }
Thread overview: 7+ messages
2019-10-20 13:23 [0/3] Turn current_vector_size into a vec_info field Richard Sandiford
2019-10-20 13:27 ` [1/3] Avoid setting current_vector_size in get_vec_alignment_for_array_type Richard Sandiford
2019-10-30 14:22 ` Richard Biener
2019-10-20 13:30 ` [2/3] Pass vec_infos to more routines Richard Sandiford
2019-10-30 14:25 ` Richard Biener
2019-10-20 14:28 ` [3/3] Replace current_vector_size with vec_info::vector_size Richard Sandiford
2019-10-21 6:01 ` [0/3] Turn current_vector_size into a vec_info field Richard Biener