>> I guess as a temporary thing your approach is OK but we shouldn't
>> add these as part of new code - it's supposed to handle legacy
>> cases that we didn't fixup yet.

Do you mean we need to fix the LC SSA PHI flow so that we don't need to
set vinfo->any_known_not_updated_vssa = true?  After that is fixed, is
this patch OK for trunk with the
'vinfo->any_known_not_updated_vssa = true' removed?

Thanks.

juzhe.zhong@rivai.ai

From: Richard Biener
Date: 2023-08-10 15:58
To: Ju-Zhe Zhong
CC: gcc-patches; richard.sandiford
Subject: Re: [PATCH V2] VECT: Support loop len control on EXTRACT_LAST vectorization

On Thu, 10 Aug 2023, juzhe.zhong@rivai.ai wrote:

> From: Ju-Zhe Zhong
>
> Hi, Richard and Richi.
>
> This patch adds support for live vectorization by VEC_EXTRACT for LEN
> loop control.
>
> Consider the following case:
>
> #include <stdint.h>
>
> #define EXTRACT_LAST(TYPE) \
>   TYPE __attribute__ ((noinline, noclone)) \
>   test_##TYPE (TYPE *x, int n, TYPE value) \
>   { \
>     TYPE last; \
>     for (int j = 0; j < n; ++j) \
>       { \
>         last = x[j]; \
>         x[j] = last * value; \
>       } \
>     return last; \
>   }
>
> #define TEST_ALL(T) \
>   T (uint8_t) \
>
> TEST_ALL (EXTRACT_LAST)
>
> ARM SVE IR:
>
> Preheader:
>   max_mask_34 = .WHILE_ULT (0, bnd.5_6, { 0, ... });
>
> Loop:
>   ...
>   # loop_mask_22 = PHI
>   ...
>   vect_last_12.8_23 = .MASK_LOAD (_7, 8B, loop_mask_22);
>   vect__4.9_27 = vect_last_12.8_23 * vect_cst__26;
>   .MASK_STORE (_7, 8B, loop_mask_22, vect__4.9_27);
>   ...
>   next_mask_35 = .WHILE_ULT (_1, bnd.5_6, { 0, ... });
>   ...
>
> Epilogue:
>   _25 = .EXTRACT_LAST (loop_mask_22, vect_last_12.8_23);
>
> For RVV, since we prefer len in loop control, after this patch:
>
> Loop:
>   ...
>   loop_len_22 = SELECT_VL;
>   vect_last_12.8_23 = .MASK_LOAD (_7, 8B, loop_len_22);
>   vect__4.9_27 = vect_last_12.8_23 * vect_cst__26;
>   .MASK_STORE (_7, 8B, loop_len_22, vect__4.9_27);
>   ...
>
> Epilogue:
>   _25 = .VEC_EXTRACT (loop_len_22 - 1 - bias, vect_last_12.8_23);
>
> Details of this approach:
>
> 1. Step 1 - Add 'vect_can_vectorize_extract_last_with_len_p' to enable
>    live vectorization for LEN loop control.
>
>    In this function we check whether the target supports:
>    - LEN as the loop control.
>    - The VEC_EXTRACT optab.
>
> 2. Step 2 - Record LEN for loop control if
>    'vect_can_vectorize_extract_last_with_len_p' is true.
>
> 3. Step 3 - Generate VEC_EXTRACT (v, LEN - 1 - BIAS).
>
> NOTE: This patch sets 'vinfo->any_known_not_updated_vssa = true;' since
> the original STMT is a simple assignment whereas VEC_EXTRACT is neither
> a pure nor a const function according to internal-fn.def:
>
> DEF_INTERNAL_OPTAB_FN (VEC_EXTRACT, 0, vec_extract, vec_extract)
>
> If we don't set 'vinfo->any_known_not_updated_vssa' to true, it causes
> an ICE in:
>
> if (need_ssa_update_p (cfun))
>   {
>     gcc_assert (loop_vinfo->any_known_not_updated_vssa);  ----> assertion fails here
>     fun->gimple_df->ssa_renaming_needed = false;
>     todo |= TODO_update_ssa_only_virtuals;
>   }
>
> I saw there are 2 places that set 'vinfo->any_known_not_updated_vssa'
> to true:
>
> - One is in 'vectorizable_simd_clone_call':
>
>   /* When the original call is pure or const but the SIMD ABI dictates
>      an aggregate return we will have to use a virtual definition and
>      in a loop eventually even need to add a virtual PHI.  That's
>      not straight-forward so allow to fix this up via renaming.  */
>   if (gimple_call_lhs (stmt)
>       && !gimple_vdef (stmt)
>       && TREE_CODE (TREE_TYPE (TREE_TYPE (bestn->decl))) == ARRAY_TYPE)
>     vinfo->any_known_not_updated_vssa = true;
>
> - The other is in 'vectorizable_load':
>
>   if (memory_access_type == VMAT_LOAD_STORE_LANES)
>     vinfo->any_known_not_updated_vssa = true;
>
> It seems they have the same reason as what I am doing in
> 'vectorizable_live_operation'.  Feel free to correct me if I am wrong.

You should always manually update things.  Did you verify the mask case
is handled by this?
There's the odd

      if (stmts)
	{
	  gimple_stmt_iterator exit_gsi = gsi_after_labels (exit_bb);
	  gsi_insert_seq_before (&exit_gsi, stmts, GSI_SAME_STMT);

	  /* Remove existing phi from lhs and create one copy from new_tree.  */
	  tree lhs_phi = NULL_TREE;
	  gimple_stmt_iterator gsi;
	  for (gsi = gsi_start_phis (exit_bb);
	       !gsi_end_p (gsi); gsi_next (&gsi))
	    {
	      gimple *phi = gsi_stmt (gsi);
	      if ((gimple_phi_arg_def (phi, 0) == lhs))
		{
		  remove_phi_node (&gsi, false);
		  lhs_phi = gimple_phi_result (phi);
		  gimple *copy = gimple_build_assign (lhs_phi, new_tree);
		  gsi_insert_before (&exit_gsi, copy, GSI_SAME_STMT);
		  break;
		}
	    }

code but I don't think it will create new LC PHIs for the mask, instead
it will break LC SSA as well by removing a PHI?

I guess as a temporary thing your approach is OK but we shouldn't
add these as part of new code - it's supposed to handle legacy
cases that we didn't fixup yet.

Richard.

> Bootstrap and Regression on X86 passed.
>
> gcc/ChangeLog:
>
>	* tree-vect-loop.cc (vect_can_vectorize_extract_last_with_len_p):
>	New function.
>	(vectorizable_live_operation): Add loop LEN control.
>
> ---
>  gcc/tree-vect-loop.cc | 74 +++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 68 insertions(+), 6 deletions(-)
>
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 00058c3c13e..208918f53fb 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -8964,6 +8964,24 @@ vect_can_vectorize_without_simd_p (code_helper code)
>  	  && vect_can_vectorize_without_simd_p (tree_code (code)));
>  }
>
> +/* Return true if target supports extract last vectorization with LEN.  */
> +
> +static bool
> +vect_can_vectorize_extract_last_with_len_p (tree vectype)
> +{
> +  /* Return false if target doesn't support LEN in loop control.
> +   */
> +  machine_mode vmode;
> +  if (!get_len_load_store_mode (TYPE_MODE (vectype), true).exists (&vmode)
> +      || !get_len_load_store_mode (TYPE_MODE (vectype), false).exists (&vmode))
> +    return false;
> +
> +  /* The target needs to support VEC_EXTRACT to extract the last active
> +     element.  */
> +  return convert_optab_handler (vec_extract_optab,
> +				TYPE_MODE (vectype),
> +				TYPE_MODE (TREE_TYPE (vectype)))
> +	 != CODE_FOR_nothing;
> +}
> +
>  /* Create vector init for vectorized iv.  */
>  static tree
>  vect_create_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
> @@ -10282,7 +10300,8 @@ vectorizable_live_operation (vec_info *vinfo,
>    if (loop_vinfo && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo))
>      {
>        if (!direct_internal_fn_supported_p (IFN_EXTRACT_LAST, vectype,
> -					   OPTIMIZE_FOR_SPEED))
> +					   OPTIMIZE_FOR_SPEED)
> +	  && !vect_can_vectorize_extract_last_with_len_p (vectype))
>	{
>	  if (dump_enabled_p ())
>	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -10311,9 +10330,14 @@
>        else
>	{
>	  gcc_assert (ncopies == 1 && !slp_node);
> -	  vect_record_loop_mask (loop_vinfo,
> -				 &LOOP_VINFO_MASKS (loop_vinfo),
> -				 1, vectype, NULL);
> +	  if (vect_can_vectorize_extract_last_with_len_p (vectype))
> +	    vect_record_loop_len (loop_vinfo,
> +				  &LOOP_VINFO_LENS (loop_vinfo),
> +				  1, vectype, 1);
> +	  else
> +	    vect_record_loop_mask (loop_vinfo,
> +				   &LOOP_VINFO_MASKS (loop_vinfo),
> +				   1, vectype, NULL);
>	}
>      }
>    /* ??? Enable for loop costing as well.  */
> @@ -10339,7 +10363,9 @@
>    gimple *vec_stmt;
>    if (slp_node)
>      {
> -      gcc_assert (!loop_vinfo || !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
> +      gcc_assert (!loop_vinfo
> +		  || (!LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> +		      && !LOOP_VINFO_FULLY_WITH_LENGTH_P (loop_vinfo)));
>
>        /* Get the correct slp vectorized stmt.
>	 */
>        vec_lhs = SLP_TREE_VEC_DEFS (slp_node)[vec_entry];
> @@ -10383,7 +10409,43 @@
>
>    gimple_seq stmts = NULL;
>    tree new_tree;
> -  if (LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
> +  if (LOOP_VINFO_FULLY_WITH_LENGTH_P (loop_vinfo))
> +    {
> +      /* Emit:
> +
> +	   SCALAR_RES = VEC_EXTRACT
> +
> +	 where VEC_LHS is the vectorized live-out result and LEN is the
> +	 loop length for the final iteration.  */
> +      gcc_assert (ncopies == 1 && !slp_node);
> +      tree scalar_type = TREE_TYPE (STMT_VINFO_VECTYPE (stmt_info));
> +      tree len
> +	= vect_get_loop_len (loop_vinfo, gsi, &LOOP_VINFO_LENS (loop_vinfo),
> +			     1, vectype, 0, 0);
> +
> +      /* BIAS + 1.  */
> +      signed char biasval = LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo);
> +      tree bias_one
> +	= size_binop (PLUS_EXPR, build_int_cst (TREE_TYPE (len), biasval),
> +		      build_one_cst (TREE_TYPE (len)));
> +
> +      /* LAST_INDEX = LEN - (BIAS + 1).  */
> +      tree last_index
> +	= gimple_build (&stmts, MINUS_EXPR, TREE_TYPE (len), len, bias_one);
> +
> +      tree scalar_res = gimple_build (&stmts, CFN_VEC_EXTRACT, scalar_type,
> +				      vec_lhs_phi, last_index);
> +
> +      /* Convert the extracted vector element to the scalar type.  */
> +      new_tree = gimple_convert (&stmts, lhs_type, scalar_res);
> +      /* The original stmt is a simple assignment but VEC_EXTRACT is not
> +	 pure or const since it may return a memory result; we will have
> +	 to use a virtual definition and in a loop eventually even need
> +	 to add a virtual PHI.  That's not straight-forward so allow to
> +	 fix this up via renaming.  */
> +      vinfo->any_known_not_updated_vssa = true;
> +    }
> +  else if (LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
>      {
>        /* Emit:
>

--
Richard Biener
SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
Germany; GF: Ivo Totev, Andrew McDonald, Werner Knoblich;
(HRB 36809, AG Nuernberg)