From: Richard Biener
Date: Wed, 25 Jul 2018 09:33:00 -0000
Subject: Re: [28/46] Use stmt_vec_info instead of gimple stmts internally (part 1)
To: GCC Patches, richard.sandiford@arm.com
References: <87wotlrmen.fsf@arm.com> <87k1plne5l.fsf@arm.com>
In-Reply-To: <87k1plne5l.fsf@arm.com>

On Tue, Jul 24, 2018 at 12:04 PM Richard Sandiford wrote:
>
> This first part makes functions use stmt_vec_infos instead of
> gimple stmts in cases where the stmt_vec_info was already available
> and where the change is mechanical.  Most of it is just replacing
> "stmt" with "stmt_info".

OK

>
> 2018-07-24  Richard Sandiford
>
> gcc/
>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences):
>         (vect_check_gather_scatter, vect_create_data_ref_ptr, bump_vector_ptr)
>         (vect_permute_store_chain, vect_setup_realignment)
>         (vect_permute_load_chain, vect_shift_permute_load_chain)
>         (vect_transform_grouped_load): Use stmt_vec_info rather than gimple
>         stmts internally, and when passing values to other vectorizer routines.
>         * tree-vect-loop-manip.c (vect_can_advance_ivs_p): Likewise.
>         * tree-vect-loop.c (vect_analyze_scalar_cycles_1)
>         (vect_analyze_loop_operations, get_initial_def_for_reduction)
>         (vect_create_epilog_for_reduction, vectorize_fold_left_reduction)
>         (vectorizable_reduction, vectorizable_induction)
>         (vectorizable_live_operation, vect_transform_loop_stmt)
>         (vect_transform_loop): Likewise.
>         * tree-vect-patterns.c (vect_reassociating_reduction_p)
>         (vect_recog_widen_op_pattern, vect_recog_mixed_size_cond_pattern)
>         (vect_recog_bool_pattern, vect_recog_gather_scatter_pattern): Likewise.
>         * tree-vect-slp.c (vect_analyze_slp_instance): Likewise.
>         (vect_slp_analyze_node_operations_1): Likewise.
> * tree-vect-stmts.c (vect_mark_relevant, process_use) > (exist_non_indexing_operands_for_use_p, vect_init_vector_1) > (vect_mark_stmts_to_be_vectorized, vect_get_vec_def_for_operand) > (vect_finish_stmt_generation_1, get_group_load_store_type) > (get_load_store_type, vect_build_gather_load_calls) > (vectorizable_bswap, vectorizable_call, vectorizable_simd_clone_call) > (vect_create_vectorized_demotion_stmts, vectorizable_conversion) > (vectorizable_assignment, vectorizable_shift, vectorizable_operation) > (vectorizable_store, vectorizable_load, vectorizable_condition) > (vectorizable_comparison, vect_analyze_stmt, vect_transform_stmt) > (supportable_widening_operation): Likewise. > (vect_get_vector_types_for_stmt): Likewise. > * tree-vectorizer.h (vect_dr_behavior): Likewise. > > Index: gcc/tree-vect-data-refs.c > =================================================================== > --- gcc/tree-vect-data-refs.c 2018-07-24 10:23:31.736764378 +0100 > +++ gcc/tree-vect-data-refs.c 2018-07-24 10:23:35.376732054 +0100 > @@ -712,7 +712,7 @@ vect_slp_analyze_node_dependences (slp_i > been sunk to (and we verify if we can do that as well). */ > if (gimple_visited_p (stmt)) > { > - if (stmt != last_store) > + if (stmt_info != last_store) > continue; > unsigned i; > stmt_vec_info store_info; > @@ -3666,7 +3666,7 @@ vect_check_gather_scatter (gimple *stmt, > > /* See whether this is already a call to a gather/scatter internal function. > If not, see whether it's a masked load or store. */ > - gcall *call = dyn_cast (stmt); > + gcall *call = dyn_cast (stmt_info->stmt); > if (call && gimple_call_internal_p (call)) > { > ifn = gimple_call_internal_fn (call); > @@ -4677,8 +4677,8 @@ vect_create_data_ref_ptr (gimple *stmt, > if (loop_vinfo) > { > loop = LOOP_VINFO_LOOP (loop_vinfo); > - nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt); > - containing_loop = (gimple_bb (stmt))->loop_father; > + nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info); > + containing_loop = (gimple_bb (stmt_info->stmt))->loop_father; > pe = loop_preheader_edge (loop); > } > else > @@ -4786,7 +4786,7 @@ vect_create_data_ref_ptr (gimple *stmt, > > /* Create: (&(base[init_val+offset]+byte_offset) in the loop preheader. */ > > - new_temp = vect_create_addr_base_for_vector_ref (stmt, &new_stmt_list, > + new_temp = vect_create_addr_base_for_vector_ref (stmt_info, &new_stmt_list, > offset, byte_offset); > if (new_stmt_list) > { > @@ -4934,7 +4934,7 @@ bump_vector_ptr (tree dataref_ptr, gimpl > new_dataref_ptr = make_ssa_name (TREE_TYPE (dataref_ptr)); > incr_stmt = gimple_build_assign (new_dataref_ptr, POINTER_PLUS_EXPR, > dataref_ptr, update); > - vect_finish_stmt_generation (stmt, incr_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, incr_stmt, gsi); > > /* Copy the points-to information if it exists. 
*/ > if (DR_PTR_INFO (dr)) > @@ -5282,7 +5282,7 @@ vect_permute_store_chain (vec dr_c > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1, > vect2, perm3_mask_low); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > > vect1 = data_ref; > vect2 = dr_chain[2]; > @@ -5293,7 +5293,7 @@ vect_permute_store_chain (vec dr_c > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect1, > vect2, perm3_mask_high); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[j] = data_ref; > } > } > @@ -5332,7 +5332,7 @@ vect_permute_store_chain (vec dr_c > high = make_temp_ssa_name (vectype, NULL, "vect_inter_high"); > perm_stmt = gimple_build_assign (high, VEC_PERM_EXPR, vect1, > vect2, perm_mask_high); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[2*j] = high; > > /* Create interleaving stmt: > @@ -5342,7 +5342,7 @@ vect_permute_store_chain (vec dr_c > low = make_temp_ssa_name (vectype, NULL, "vect_inter_low"); > perm_stmt = gimple_build_assign (low, VEC_PERM_EXPR, vect1, > vect2, perm_mask_low); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[2*j+1] = low; > } > memcpy (dr_chain.address (), result_chain->address (), > @@ -5415,7 +5415,7 @@ vect_setup_realignment (gimple *stmt, gi > struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info); > struct loop *loop = NULL; > edge pe = NULL; > - tree scalar_dest = gimple_assign_lhs (stmt); > + tree scalar_dest = gimple_assign_lhs (stmt_info->stmt); > tree vec_dest; > gimple *inc; > tree ptr; > @@ -5429,13 +5429,13 @@ vect_setup_realignment (gimple *stmt, gi > bool inv_p; > bool compute_in_loop = false; > bool nested_in_vect_loop = false; > - struct loop *containing_loop = (gimple_bb (stmt))->loop_father; > + struct loop *containing_loop = (gimple_bb (stmt_info->stmt))->loop_father; > struct loop *loop_for_initial_load = NULL; > > if (loop_vinfo) > { > loop = LOOP_VINFO_LOOP (loop_vinfo); > - nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt); > + nested_in_vect_loop = nested_in_vect_loop_p (loop, stmt_info); > } > > gcc_assert (alignment_support_scheme == dr_explicit_realign > @@ -5518,9 +5518,9 @@ vect_setup_realignment (gimple *stmt, gi > > gcc_assert (!compute_in_loop); > vec_dest = vect_create_destination_var (scalar_dest, vectype); > - ptr = vect_create_data_ref_ptr (stmt, vectype, loop_for_initial_load, > - NULL_TREE, &init_addr, NULL, &inc, > - true, &inv_p); > + ptr = vect_create_data_ref_ptr (stmt_info, vectype, > + loop_for_initial_load, NULL_TREE, > + &init_addr, NULL, &inc, true, &inv_p); > if (TREE_CODE (ptr) == SSA_NAME) > new_temp = copy_ssa_name (ptr); > else > @@ -5562,7 +5562,7 @@ vect_setup_realignment (gimple *stmt, gi > if (!init_addr) > { > /* Generate the INIT_ADDR computation outside LOOP. 
*/ > - init_addr = vect_create_addr_base_for_vector_ref (stmt, &stmts, > + init_addr = vect_create_addr_base_for_vector_ref (stmt_info, &stmts, > NULL_TREE); > if (loop) > { > @@ -5890,7 +5890,7 @@ vect_permute_load_chain (vec dr_ch > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_low"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect, > second_vect, perm3_mask_low); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > > /* Create interleaving stmt (high part of): > high = VEC_PERM_EXPR @@ -5900,7 +5900,7 @@ vect_permute_load_chain (vec dr_ch > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle3_high"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, first_vect, > second_vect, perm3_mask_high); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[k] = data_ref; > } > } > @@ -5935,7 +5935,7 @@ vect_permute_load_chain (vec dr_ch > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > first_vect, second_vect, > perm_mask_even); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[j/2] = data_ref; > > /* data_ref = permute_odd (first_data_ref, second_data_ref); */ > @@ -5943,7 +5943,7 @@ vect_permute_load_chain (vec dr_ch > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > first_vect, second_vect, > perm_mask_odd); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[j/2+length/2] = data_ref; > } > memcpy (dr_chain.address (), result_chain->address (), > @@ -6143,26 +6143,26 @@ vect_shift_permute_load_chain (vec > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > first_vect, first_vect, > perm2_mask1); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > vect[0] = data_ref; > > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shuffle2"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > second_vect, second_vect, > perm2_mask2); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > vect[1] = data_ref; > > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > vect[0], vect[1], shift1_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[j/2 + length/2] = data_ref; > > data_ref = make_temp_ssa_name (vectype, NULL, "vect_select"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > vect[0], vect[1], select_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[j/2] = data_ref; > } > memcpy (dr_chain.address (), result_chain->address (), > @@ -6259,7 +6259,7 @@ vect_shift_permute_load_chain (vec > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > dr_chain[k], dr_chain[k], > perm3_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > vect[k] = data_ref; > } > > @@ -6269,7 +6269,7 @@ vect_shift_permute_load_chain (vec > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, > vect[k % 3], vect[(k + 1) % 3], > shift1_mask); > - 
vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > vect_shift[k] = data_ref; > } > > @@ -6280,7 +6280,7 @@ vect_shift_permute_load_chain (vec > vect_shift[(4 - k) % 3], > vect_shift[(3 - k) % 3], > shift2_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > vect[k] = data_ref; > } > > @@ -6289,13 +6289,13 @@ vect_shift_permute_load_chain (vec > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift3"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[0], > vect[0], shift3_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[nelt % 3] = data_ref; > > data_ref = make_temp_ssa_name (vectype, NULL, "vect_shift4"); > perm_stmt = gimple_build_assign (data_ref, VEC_PERM_EXPR, vect[1], > vect[1], shift4_mask); > - vect_finish_stmt_generation (stmt, perm_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, perm_stmt, gsi); > (*result_chain)[0] = data_ref; > return true; > } > @@ -6328,10 +6328,10 @@ vect_transform_grouped_load (gimple *stm > mode = TYPE_MODE (STMT_VINFO_VECTYPE (stmt_info)); > if (targetm.sched.reassociation_width (VEC_PERM_EXPR, mode) > 1 > || pow2p_hwi (size) > - || !vect_shift_permute_load_chain (dr_chain, size, stmt, > + || !vect_shift_permute_load_chain (dr_chain, size, stmt_info, > gsi, &result_chain)) > - vect_permute_load_chain (dr_chain, size, stmt, gsi, &result_chain); > - vect_record_grouped_load_vectors (stmt, result_chain); > + vect_permute_load_chain (dr_chain, size, stmt_info, gsi, &result_chain); > + vect_record_grouped_load_vectors (stmt_info, result_chain); > result_chain.release (); > } > > Index: gcc/tree-vect-loop-manip.c > =================================================================== > --- gcc/tree-vect-loop-manip.c 2018-07-24 10:23:31.736764378 +0100 > +++ gcc/tree-vect-loop-manip.c 2018-07-24 10:23:35.376732054 +0100 > @@ -1380,8 +1380,8 @@ vect_can_advance_ivs_p (loop_vec_info lo > stmt_vec_info phi_info = loop_vinfo->lookup_stmt (phi); > if (dump_enabled_p ()) > { > - dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: "); > - dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0); > + dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: "); > + dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi_info->stmt, 0); > } > > /* Skip virtual phi's. The data dependences that are associated with > Index: gcc/tree-vect-loop.c > =================================================================== > --- gcc/tree-vect-loop.c 2018-07-24 10:23:31.740764343 +0100 > +++ gcc/tree-vect-loop.c 2018-07-24 10:23:35.376732054 +0100 > @@ -526,7 +526,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_i > || (LOOP_VINFO_LOOP (loop_vinfo) != loop > && TREE_CODE (step) != INTEGER_CST)) > { > - worklist.safe_push (phi); > + worklist.safe_push (stmt_vinfo); > continue; > } > > @@ -1595,11 +1595,12 @@ vect_analyze_loop_operations (loop_vec_i > need_to_vectorize = true; > if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_induction_def > && ! PURE_SLP_STMT (stmt_info)) > - ok = vectorizable_induction (phi, NULL, NULL, NULL, &cost_vec); > + ok = vectorizable_induction (stmt_info, NULL, NULL, NULL, > + &cost_vec); > else if ((STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def > || STMT_VINFO_DEF_TYPE (stmt_info) == vect_nested_cycle) > && ! 
PURE_SLP_STMT (stmt_info)) > - ok = vectorizable_reduction (phi, NULL, NULL, NULL, NULL, > + ok = vectorizable_reduction (stmt_info, NULL, NULL, NULL, NULL, > &cost_vec); > } > > @@ -1607,7 +1608,7 @@ vect_analyze_loop_operations (loop_vec_i > if (ok > && STMT_VINFO_LIVE_P (stmt_info) > && !PURE_SLP_STMT (stmt_info)) > - ok = vectorizable_live_operation (phi, NULL, NULL, -1, NULL, > + ok = vectorizable_live_operation (stmt_info, NULL, NULL, -1, NULL, > &cost_vec); > > if (!ok) > @@ -4045,7 +4046,7 @@ get_initial_def_for_reduction (gimple *s > struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo); > tree scalar_type = TREE_TYPE (init_val); > tree vectype = get_vectype_for_scalar_type (scalar_type); > - enum tree_code code = gimple_assign_rhs_code (stmt); > + enum tree_code code = gimple_assign_rhs_code (stmt_vinfo->stmt); > tree def_for_init; > tree init_def; > REAL_VALUE_TYPE real_init_val = dconst0; > @@ -4057,8 +4058,8 @@ get_initial_def_for_reduction (gimple *s > gcc_assert (POINTER_TYPE_P (scalar_type) || INTEGRAL_TYPE_P (scalar_type) > || SCALAR_FLOAT_TYPE_P (scalar_type)); > > - gcc_assert (nested_in_vect_loop_p (loop, stmt) > - || loop == (gimple_bb (stmt))->loop_father); > + gcc_assert (nested_in_vect_loop_p (loop, stmt_vinfo) > + || loop == (gimple_bb (stmt_vinfo->stmt))->loop_father); > > vect_reduction_type reduction_type > = STMT_VINFO_VEC_REDUCTION_TYPE (stmt_vinfo); > @@ -4127,7 +4128,7 @@ get_initial_def_for_reduction (gimple *s > if (reduction_type != COND_REDUCTION > && reduction_type != EXTRACT_LAST_REDUCTION) > { > - init_def = vect_get_vec_def_for_operand (init_val, stmt); > + init_def = vect_get_vec_def_for_operand (init_val, stmt_vinfo); > break; > } > } > @@ -4406,7 +4407,7 @@ vect_create_epilog_for_reduction (vec tree vec_dest; > tree new_temp = NULL_TREE, new_dest, new_name, new_scalar_dest; > gimple *epilog_stmt = NULL; > - enum tree_code code = gimple_assign_rhs_code (stmt); > + enum tree_code code = gimple_assign_rhs_code (stmt_info->stmt); > gimple *exit_phi; > tree bitsize; > tree adjustment_def = NULL; > @@ -4435,7 +4436,7 @@ vect_create_epilog_for_reduction (vec if (slp_node) > group_size = SLP_TREE_SCALAR_STMTS (slp_node).length (); > > - if (nested_in_vect_loop_p (loop, stmt)) > + if (nested_in_vect_loop_p (loop, stmt_info)) > { > outer_loop = loop; > loop = loop->inner; > @@ -4504,11 +4505,13 @@ vect_create_epilog_for_reduction (vec /* Do not use an adjustment def as that case is not supported > correctly if ncopies is not one. */ > vect_is_simple_use (initial_def, loop_vinfo, &initial_def_dt); > - vec_initial_def = vect_get_vec_def_for_operand (initial_def, stmt); > + vec_initial_def = vect_get_vec_def_for_operand (initial_def, > + stmt_info); > } > else > - vec_initial_def = get_initial_def_for_reduction (stmt, initial_def, > - &adjustment_def); > + vec_initial_def > + = get_initial_def_for_reduction (stmt_info, initial_def, > + &adjustment_def); > vec_initial_defs.create (1); > vec_initial_defs.quick_push (vec_initial_def); > } > @@ -5676,7 +5679,7 @@ vect_create_epilog_for_reduction (vec preheader_arg = PHI_ARG_DEF_FROM_EDGE (use_stmt, > loop_preheader_edge (outer_loop)); > vect_phi_init = get_initial_def_for_reduction > - (stmt, preheader_arg, NULL); > + (stmt_info, preheader_arg, NULL); > > /* Update phi node arguments with vs0 and vs2. 
*/ > add_phi_arg (vect_phi, vect_phi_init, > @@ -5841,7 +5844,7 @@ vectorize_fold_left_reduction (gimple *s > else > ncopies = vect_get_num_copies (loop_vinfo, vectype_in); > > - gcc_assert (!nested_in_vect_loop_p (loop, stmt)); > + gcc_assert (!nested_in_vect_loop_p (loop, stmt_info)); > gcc_assert (ncopies == 1); > gcc_assert (TREE_CODE_LENGTH (code) == binary_op); > gcc_assert (reduc_index == (code == MINUS_EXPR ? 0 : 1)); > @@ -5859,13 +5862,14 @@ vectorize_fold_left_reduction (gimple *s > auto_vec vec_oprnds0; > if (slp_node) > { > - vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL, slp_node); > + vect_get_vec_defs (op0, NULL_TREE, stmt_info, &vec_oprnds0, NULL, > + slp_node); > group_size = SLP_TREE_SCALAR_STMTS (slp_node).length (); > scalar_dest_def_info = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1]; > } > else > { > - tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt); > + tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt_info); > vec_oprnds0.create (1); > vec_oprnds0.quick_push (loop_vec_def0); > scalar_dest_def_info = stmt_info; > @@ -6099,7 +6103,7 @@ vectorizable_reduction (gimple *stmt, gi > && STMT_VINFO_DEF_TYPE (stmt_info) != vect_nested_cycle) > return false; > > - if (nested_in_vect_loop_p (loop, stmt)) > + if (nested_in_vect_loop_p (loop, stmt_info)) > { > loop = loop->inner; > nested_cycle = true; > @@ -6109,7 +6113,7 @@ vectorizable_reduction (gimple *stmt, gi > gcc_assert (slp_node > && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info); > > - if (gphi *phi = dyn_cast (stmt)) > + if (gphi *phi = dyn_cast (stmt_info->stmt)) > { > tree phi_result = gimple_phi_result (phi); > /* Analysis is fully done on the reduction stmt invocation. */ > @@ -6164,7 +6168,7 @@ vectorizable_reduction (gimple *stmt, gi > && STMT_VINFO_RELEVANT (reduc_stmt_info) <= vect_used_only_live > && (use_stmt_info = loop_vinfo->lookup_single_use (phi_result)) > && (use_stmt_info == reduc_stmt_info > - || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt)) > + || STMT_VINFO_RELATED_STMT (use_stmt_info) == reduc_stmt_info)) > single_defuse_cycle = true; > > /* Create the destination vector */ > @@ -6548,7 +6552,7 @@ vectorizable_reduction (gimple *stmt, gi > { > /* Only call during the analysis stage, otherwise we'll lose > STMT_VINFO_TYPE. 
*/ > - if (!vec_stmt && !vectorizable_condition (stmt, gsi, NULL, > + if (!vec_stmt && !vectorizable_condition (stmt_info, gsi, NULL, > ops[reduc_index], 0, NULL, > cost_vec)) > { > @@ -6935,7 +6939,7 @@ vectorizable_reduction (gimple *stmt, gi > && (STMT_VINFO_RELEVANT (stmt_info) <= vect_used_only_live) > && (use_stmt_info = loop_vinfo->lookup_single_use (reduc_phi_result)) > && (use_stmt_info == stmt_info > - || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt)) > + || STMT_VINFO_RELATED_STMT (use_stmt_info) == stmt_info)) > { > single_defuse_cycle = true; > epilog_copies = 1; > @@ -7015,13 +7019,13 @@ vectorizable_reduction (gimple *stmt, gi > > if (reduction_type == FOLD_LEFT_REDUCTION) > return vectorize_fold_left_reduction > - (stmt, gsi, vec_stmt, slp_node, reduc_def_phi, code, > + (stmt_info, gsi, vec_stmt, slp_node, reduc_def_phi, code, > reduc_fn, ops, vectype_in, reduc_index, masks); > > if (reduction_type == EXTRACT_LAST_REDUCTION) > { > gcc_assert (!slp_node); > - return vectorizable_condition (stmt, gsi, vec_stmt, > + return vectorizable_condition (stmt_info, gsi, vec_stmt, > NULL, reduc_index, NULL, NULL); > } > > @@ -7053,7 +7057,7 @@ vectorizable_reduction (gimple *stmt, gi > if (code == COND_EXPR) > { > gcc_assert (!slp_node); > - vectorizable_condition (stmt, gsi, vec_stmt, > + vectorizable_condition (stmt_info, gsi, vec_stmt, > PHI_RESULT (phis[0]->stmt), > reduc_index, NULL, NULL); > /* Multiple types are not supported for condition. */ > @@ -7090,12 +7094,12 @@ vectorizable_reduction (gimple *stmt, gi > else > { > vec_oprnds0.quick_push > - (vect_get_vec_def_for_operand (ops[0], stmt)); > + (vect_get_vec_def_for_operand (ops[0], stmt_info)); > vec_oprnds1.quick_push > - (vect_get_vec_def_for_operand (ops[1], stmt)); > + (vect_get_vec_def_for_operand (ops[1], stmt_info)); > if (op_type == ternary_op) > vec_oprnds2.quick_push > - (vect_get_vec_def_for_operand (ops[2], stmt)); > + (vect_get_vec_def_for_operand (ops[2], stmt_info)); > } > } > else > @@ -7144,7 +7148,8 @@ vectorizable_reduction (gimple *stmt, gi > new_temp = make_ssa_name (vec_dest, call); > gimple_call_set_lhs (call, new_temp); > gimple_call_set_nothrow (call, true); > - new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi); > + new_stmt_info > + = vect_finish_stmt_generation (stmt_info, call, gsi); > } > else > { > @@ -7156,7 +7161,7 @@ vectorizable_reduction (gimple *stmt, gi > new_temp = make_ssa_name (vec_dest, new_stmt); > gimple_assign_set_lhs (new_stmt, new_temp); > new_stmt_info > - = vect_finish_stmt_generation (stmt, new_stmt, gsi); > + = vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > } > > if (slp_node) > @@ -7184,7 +7189,7 @@ vectorizable_reduction (gimple *stmt, gi > if ((!single_defuse_cycle || code == COND_EXPR) && !slp_node) > vect_defs[0] = gimple_get_lhs ((*vec_stmt)->stmt); > > - vect_create_epilog_for_reduction (vect_defs, stmt, reduc_def_phi, > + vect_create_epilog_for_reduction (vect_defs, stmt_info, reduc_def_phi, > epilog_copies, reduc_fn, phis, > double_reduc, slp_node, slp_node_instance, > cond_reduc_val, cond_reduc_op_code, > @@ -7293,7 +7298,7 @@ vectorizable_induction (gimple *phi, > gcc_assert (ncopies >= 1); > > /* FORNOW. These restrictions should be relaxed. */ > - if (nested_in_vect_loop_p (loop, phi)) > + if (nested_in_vect_loop_p (loop, stmt_info)) > { > imm_use_iterator imm_iter; > use_operand_p use_p; > @@ -7443,10 +7448,10 @@ vectorizable_induction (gimple *phi, > new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr), > expr, step_expr); > if (! 
CONSTANT_CLASS_P (new_name)) > - new_name = vect_init_vector (phi, new_name, > + new_name = vect_init_vector (stmt_info, new_name, > TREE_TYPE (step_expr), NULL); > new_vec = build_vector_from_val (vectype, new_name); > - vec_step = vect_init_vector (phi, new_vec, vectype, NULL); > + vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL); > > /* Now generate the IVs. */ > unsigned group_size = SLP_TREE_SCALAR_STMTS (slp_node).length (); > @@ -7513,10 +7518,10 @@ vectorizable_induction (gimple *phi, > new_name = fold_build2 (MULT_EXPR, TREE_TYPE (step_expr), > expr, step_expr); > if (! CONSTANT_CLASS_P (new_name)) > - new_name = vect_init_vector (phi, new_name, > + new_name = vect_init_vector (stmt_info, new_name, > TREE_TYPE (step_expr), NULL); > new_vec = build_vector_from_val (vectype, new_name); > - vec_step = vect_init_vector (phi, new_vec, vectype, NULL); > + vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL); > for (; ivn < nvects; ++ivn) > { > gimple *iv = SLP_TREE_VEC_STMTS (slp_node)[ivn - nivs]->stmt; > @@ -7549,7 +7554,7 @@ vectorizable_induction (gimple *phi, > /* iv_loop is nested in the loop to be vectorized. init_expr had already > been created during vectorization of previous stmts. We obtain it > from the STMT_VINFO_VEC_STMT of the defining stmt. */ > - vec_init = vect_get_vec_def_for_operand (init_expr, phi); > + vec_init = vect_get_vec_def_for_operand (init_expr, stmt_info); > /* If the initial value is not of proper type, convert it. */ > if (!useless_type_conversion_p (vectype, TREE_TYPE (vec_init))) > { > @@ -7651,7 +7656,7 @@ vectorizable_induction (gimple *phi, > gcc_assert (CONSTANT_CLASS_P (new_name) > || TREE_CODE (new_name) == SSA_NAME); > new_vec = build_vector_from_val (vectype, t); > - vec_step = vect_init_vector (phi, new_vec, vectype, NULL); > + vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL); > > > /* Create the following def-use cycle: > @@ -7717,7 +7722,7 @@ vectorizable_induction (gimple *phi, > gcc_assert (CONSTANT_CLASS_P (new_name) > || TREE_CODE (new_name) == SSA_NAME); > new_vec = build_vector_from_val (vectype, t); > - vec_step = vect_init_vector (phi, new_vec, vectype, NULL); > + vec_step = vect_init_vector (stmt_info, new_vec, vectype, NULL); > > vec_def = induc_def; > prev_stmt_vinfo = induction_phi_info; > @@ -7815,7 +7820,7 @@ vectorizable_live_operation (gimple *stm > return false; > > /* FORNOW. CHECKME. */ > - if (nested_in_vect_loop_p (loop, stmt)) > + if (nested_in_vect_loop_p (loop, stmt_info)) > return false; > > /* If STMT is not relevant and it is a simple assignment and its inputs are > @@ -7823,7 +7828,7 @@ vectorizable_live_operation (gimple *stm > scalar value that it computes will be used. */ > if (!STMT_VINFO_RELEVANT_P (stmt_info)) > { > - gcc_assert (is_simple_and_all_uses_invariant (stmt, loop_vinfo)); > + gcc_assert (is_simple_and_all_uses_invariant (stmt_info, loop_vinfo)); > if (dump_enabled_p ()) > dump_printf_loc (MSG_NOTE, vect_location, > "statement is simple and uses invariant. 
Leaving in " > @@ -8222,11 +8227,11 @@ vect_transform_loop_stmt (loop_vec_info > { > dump_printf_loc (MSG_NOTE, vect_location, > "------>vectorizing statement: "); > - dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); > + dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0); > } > > if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info)) > - vect_loop_kill_debug_uses (loop, stmt); > + vect_loop_kill_debug_uses (loop, stmt_info); > > if (!STMT_VINFO_RELEVANT_P (stmt_info) > && !STMT_VINFO_LIVE_P (stmt_info)) > @@ -8267,7 +8272,7 @@ vect_transform_loop_stmt (loop_vec_info > dump_printf_loc (MSG_NOTE, vect_location, "transform statement.\n"); > > bool grouped_store = false; > - if (vect_transform_stmt (stmt, gsi, &grouped_store, NULL, NULL)) > + if (vect_transform_stmt (stmt_info, gsi, &grouped_store, NULL, NULL)) > *seen_store = stmt_info; > } > > @@ -8422,7 +8427,7 @@ vect_transform_loop (loop_vec_info loop_ > continue; > > if (MAY_HAVE_DEBUG_BIND_STMTS && !STMT_VINFO_LIVE_P (stmt_info)) > - vect_loop_kill_debug_uses (loop, phi); > + vect_loop_kill_debug_uses (loop, stmt_info); > > if (!STMT_VINFO_RELEVANT_P (stmt_info) > && !STMT_VINFO_LIVE_P (stmt_info)) > @@ -8441,7 +8446,7 @@ vect_transform_loop (loop_vec_info loop_ > { > if (dump_enabled_p ()) > dump_printf_loc (MSG_NOTE, vect_location, "transform phi.\n"); > - vect_transform_stmt (phi, NULL, NULL, NULL, NULL); > + vect_transform_stmt (stmt_info, NULL, NULL, NULL, NULL); > } > } > > Index: gcc/tree-vect-patterns.c > =================================================================== > --- gcc/tree-vect-patterns.c 2018-07-24 10:23:31.740764343 +0100 > +++ gcc/tree-vect-patterns.c 2018-07-24 10:23:35.380732018 +0100 > @@ -842,7 +842,7 @@ vect_reassociating_reduction_p (stmt_vec > /* We don't allow changing the order of the computation in the inner-loop > when doing outer-loop vectorization. 
*/ > struct loop *loop = LOOP_VINFO_LOOP (loop_info); > - if (loop && nested_in_vect_loop_p (loop, assign)) > + if (loop && nested_in_vect_loop_p (loop, stmt_info)) > return false; > > if (!vect_reassociating_reduction_p (stmt_info)) > @@ -1196,7 +1196,7 @@ vect_recog_widen_op_pattern (stmt_vec_in > auto_vec dummy_vec; > if (!vectype > || !vecitype > - || !supportable_widening_operation (wide_code, last_stmt, > + || !supportable_widening_operation (wide_code, last_stmt_info, > vecitype, vectype, > &dummy_code, &dummy_code, > &dummy_int, &dummy_vec)) > @@ -3118,11 +3118,11 @@ vect_recog_mixed_size_cond_pattern (stmt > return NULL; > > if ((TREE_CODE (then_clause) != INTEGER_CST > - && !type_conversion_p (then_clause, last_stmt, false, &orig_type0, > - &def_stmt0, &promotion)) > + && !type_conversion_p (then_clause, stmt_vinfo, false, &orig_type0, > + &def_stmt0, &promotion)) > || (TREE_CODE (else_clause) != INTEGER_CST > - && !type_conversion_p (else_clause, last_stmt, false, &orig_type1, > - &def_stmt1, &promotion))) > + && !type_conversion_p (else_clause, stmt_vinfo, false, &orig_type1, > + &def_stmt1, &promotion))) > return NULL; > > if (orig_type0 && orig_type1 > @@ -3709,7 +3709,7 @@ vect_recog_bool_pattern (stmt_vec_info s > > if (check_bool_pattern (var, vinfo, bool_stmts)) > { > - rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), last_stmt); > + rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (lhs), stmt_vinfo); > lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL); > if (useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs))) > pattern_stmt = gimple_build_assign (lhs, SSA_NAME, rhs); > @@ -3776,7 +3776,7 @@ vect_recog_bool_pattern (stmt_vec_info s > if (!check_bool_pattern (var, vinfo, bool_stmts)) > return NULL; > > - rhs = adjust_bool_stmts (bool_stmts, type, last_stmt); > + rhs = adjust_bool_stmts (bool_stmts, type, stmt_vinfo); > > lhs = vect_recog_temp_ssa_var (TREE_TYPE (lhs), NULL); > pattern_stmt > @@ -3800,7 +3800,7 @@ vect_recog_bool_pattern (stmt_vec_info s > return NULL; > > if (check_bool_pattern (var, vinfo, bool_stmts)) > - rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), last_stmt); > + rhs = adjust_bool_stmts (bool_stmts, TREE_TYPE (vectype), stmt_vinfo); > else > { > tree type = search_type_for_mask (var, vinfo); > @@ -4234,13 +4234,12 @@ vect_recog_gather_scatter_pattern (stmt_ > > /* Get the boolean that controls whether the load or store happens. > This is null if the operation is unconditional. */ > - gimple *stmt = stmt_info->stmt; > - tree mask = vect_get_load_store_mask (stmt); > + tree mask = vect_get_load_store_mask (stmt_info); > > /* Make sure that the target supports an appropriate internal > function for the gather/scatter operation. 
*/ > gather_scatter_info gs_info; > - if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info) > + if (!vect_check_gather_scatter (stmt_info, loop_vinfo, &gs_info) > || gs_info.decl) > return NULL; > > @@ -4273,7 +4272,7 @@ vect_recog_gather_scatter_pattern (stmt_ > } > else > { > - tree rhs = vect_get_store_rhs (stmt); > + tree rhs = vect_get_store_rhs (stmt_info); > if (mask != NULL) > pattern_stmt = gimple_build_call_internal (IFN_MASK_SCATTER_STORE, 5, > base, offset, scale, rhs, > @@ -4295,7 +4294,7 @@ vect_recog_gather_scatter_pattern (stmt_ > > tree vectype = STMT_VINFO_VECTYPE (stmt_info); > *type_out = vectype; > - vect_pattern_detected ("gather/scatter pattern", stmt); > + vect_pattern_detected ("gather/scatter pattern", stmt_info->stmt); > > return pattern_stmt; > } > Index: gcc/tree-vect-slp.c > =================================================================== > --- gcc/tree-vect-slp.c 2018-07-24 10:23:31.740764343 +0100 > +++ gcc/tree-vect-slp.c 2018-07-24 10:23:35.380732018 +0100 > @@ -2096,8 +2096,8 @@ vect_analyze_slp_instance (vec_info *vin > dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, > "Build SLP failed: unsupported load " > "permutation "); > - dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, > - TDF_SLIM, stmt, 0); > + dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, > + TDF_SLIM, stmt_info->stmt, 0); > } > vect_free_slp_instance (new_instance, false); > return false; > @@ -2172,8 +2172,9 @@ vect_analyze_slp_instance (vec_info *vin > gcc_assert ((const_nunits & (const_nunits - 1)) == 0); > unsigned group1_size = i & ~(const_nunits - 1); > > - gimple *rest = vect_split_slp_store_group (stmt, group1_size); > - bool res = vect_analyze_slp_instance (vinfo, stmt, max_tree_size); > + gimple *rest = vect_split_slp_store_group (stmt_info, group1_size); > + bool res = vect_analyze_slp_instance (vinfo, stmt_info, > + max_tree_size); > /* If the first non-match was in the middle of a vector, > skip the rest of that vector. */ > if (group1_size < i) > @@ -2513,7 +2514,6 @@ vect_slp_analyze_node_operations_1 (vec_ > stmt_vector_for_cost *cost_vec) > { > stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0]; > - gimple *stmt = stmt_info->stmt; > gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect); > > /* For BB vectorization vector types are assigned here. > @@ -2567,7 +2567,7 @@ vect_slp_analyze_node_operations_1 (vec_ > } > > bool dummy; > - return vect_analyze_stmt (stmt, &dummy, node, node_instance, cost_vec); > + return vect_analyze_stmt (stmt_info, &dummy, node, node_instance, cost_vec); > } > > /* Analyze statements contained in SLP tree NODE after recursively analyzing > Index: gcc/tree-vect-stmts.c > =================================================================== > --- gcc/tree-vect-stmts.c 2018-07-24 10:23:31.744764307 +0100 > +++ gcc/tree-vect-stmts.c 2018-07-24 10:23:35.384731983 +0100 > @@ -205,7 +205,7 @@ vect_mark_relevant (vec *workl > { > dump_printf_loc (MSG_NOTE, vect_location, > "mark relevant %d, live %d: ", relevant, live_p); > - dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); > + dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0); > } > > /* If this stmt is an original stmt in a pattern, we might need to mark its > @@ -244,7 +244,7 @@ vect_mark_relevant (vec *workl > return; > } > > - worklist->safe_push (stmt); > + worklist->safe_push (stmt_info); > } > > > @@ -389,10 +389,10 @@ exist_non_indexing_operands_for_use_p (t > Therefore, all we need to check is if STMT falls into the > first case, and whether var corresponds to USE. 
*/ > > - gassign *assign = dyn_cast (stmt); > + gassign *assign = dyn_cast (stmt_info->stmt); > if (!assign || !gimple_assign_copy_p (assign)) > { > - gcall *call = dyn_cast (stmt); > + gcall *call = dyn_cast (stmt_info->stmt); > if (call && gimple_call_internal_p (call)) > { > internal_fn ifn = gimple_call_internal_fn (call); > @@ -463,7 +463,7 @@ process_use (gimple *stmt, tree use, loo > > /* case 1: we are only interested in uses that need to be vectorized. Uses > that are used for address computation are not considered relevant. */ > - if (!force && !exist_non_indexing_operands_for_use_p (use, stmt)) > + if (!force && !exist_non_indexing_operands_for_use_p (use, stmt_vinfo)) > return true; > > if (!vect_is_simple_use (use, loop_vinfo, &dt, &dstmt_vinfo)) > @@ -484,8 +484,8 @@ process_use (gimple *stmt, tree use, loo > only way that STMT, which is a reduction-phi, was put in the worklist, > as there should be no other uses for DSTMT_VINFO in the loop. So we just > check that everything is as expected, and we are done. */ > - bb = gimple_bb (stmt); > - if (gimple_code (stmt) == GIMPLE_PHI > + bb = gimple_bb (stmt_vinfo->stmt); > + if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI > && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def > && gimple_code (dstmt_vinfo->stmt) != GIMPLE_PHI > && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def > @@ -576,10 +576,11 @@ process_use (gimple *stmt, tree use, loo > inductions. Otherwise we'll needlessly vectorize the IV increment > and cause hybrid SLP for SLP inductions. Unless the PHI is live > of course. */ > - else if (gimple_code (stmt) == GIMPLE_PHI > + else if (gimple_code (stmt_vinfo->stmt) == GIMPLE_PHI > && STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_induction_def > && ! STMT_VINFO_LIVE_P (stmt_vinfo) > - && (PHI_ARG_DEF_FROM_EDGE (stmt, loop_latch_edge (bb->loop_father)) > + && (PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt, > + loop_latch_edge (bb->loop_father)) > == use)) > { > if (dump_enabled_p ()) > @@ -740,7 +741,7 @@ vect_mark_stmts_to_be_vectorized (loop_v > /* Pattern statements are not inserted into the code, so > FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we > have to scan the RHS or function arguments instead. 
*/ > - if (gassign *assign = dyn_cast (stmt)) > + if (gassign *assign = dyn_cast (stmt_vinfo->stmt)) > { > enum tree_code rhs_code = gimple_assign_rhs_code (assign); > tree op = gimple_assign_rhs1 (assign); > @@ -748,10 +749,10 @@ vect_mark_stmts_to_be_vectorized (loop_v > i = 1; > if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op)) > { > - if (!process_use (stmt, TREE_OPERAND (op, 0), loop_vinfo, > - relevant, &worklist, false) > - || !process_use (stmt, TREE_OPERAND (op, 1), loop_vinfo, > - relevant, &worklist, false)) > + if (!process_use (stmt_vinfo, TREE_OPERAND (op, 0), > + loop_vinfo, relevant, &worklist, false) > + || !process_use (stmt_vinfo, TREE_OPERAND (op, 1), > + loop_vinfo, relevant, &worklist, false)) > return false; > i = 2; > } > @@ -759,27 +760,27 @@ vect_mark_stmts_to_be_vectorized (loop_v > { > op = gimple_op (assign, i); > if (TREE_CODE (op) == SSA_NAME > - && !process_use (stmt, op, loop_vinfo, relevant, > + && !process_use (stmt_vinfo, op, loop_vinfo, relevant, > &worklist, false)) > return false; > } > } > - else if (gcall *call = dyn_cast (stmt)) > + else if (gcall *call = dyn_cast (stmt_vinfo->stmt)) > { > for (i = 0; i < gimple_call_num_args (call); i++) > { > tree arg = gimple_call_arg (call, i); > - if (!process_use (stmt, arg, loop_vinfo, relevant, > + if (!process_use (stmt_vinfo, arg, loop_vinfo, relevant, > &worklist, false)) > return false; > } > } > } > else > - FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE) > + FOR_EACH_PHI_OR_STMT_USE (use_p, stmt_vinfo->stmt, iter, SSA_OP_USE) > { > tree op = USE_FROM_PTR (use_p); > - if (!process_use (stmt, op, loop_vinfo, relevant, > + if (!process_use (stmt_vinfo, op, loop_vinfo, relevant, > &worklist, false)) > return false; > } > @@ -787,9 +788,9 @@ vect_mark_stmts_to_be_vectorized (loop_v > if (STMT_VINFO_GATHER_SCATTER_P (stmt_vinfo)) > { > gather_scatter_info gs_info; > - if (!vect_check_gather_scatter (stmt, loop_vinfo, &gs_info)) > + if (!vect_check_gather_scatter (stmt_vinfo, loop_vinfo, &gs_info)) > gcc_unreachable (); > - if (!process_use (stmt, gs_info.offset, loop_vinfo, relevant, > + if (!process_use (stmt_vinfo, gs_info.offset, loop_vinfo, relevant, > &worklist, true)) > return false; > } > @@ -1362,8 +1363,8 @@ vect_init_vector_1 (gimple *stmt, gimple > basic_block new_bb; > edge pe; > > - if (nested_in_vect_loop_p (loop, stmt)) > - loop = loop->inner; > + if (nested_in_vect_loop_p (loop, stmt_vinfo)) > + loop = loop->inner; > > pe = loop_preheader_edge (loop); > new_bb = gsi_insert_on_edge_immediate (pe, new_stmt); > @@ -1573,7 +1574,7 @@ vect_get_vec_def_for_operand (tree op, g > vector_type = get_vectype_for_scalar_type (TREE_TYPE (op)); > > gcc_assert (vector_type); > - return vect_init_vector (stmt, op, vector_type, NULL); > + return vect_init_vector (stmt_vinfo, op, vector_type, NULL); > } > else > return vect_get_vec_def_for_operand_1 (def_stmt_info, dt); > @@ -1740,12 +1741,12 @@ vect_finish_stmt_generation_1 (gimple *s > dump_gimple_stmt (MSG_NOTE, TDF_SLIM, vec_stmt, 0); > } > > - gimple_set_location (vec_stmt, gimple_location (stmt)); > + gimple_set_location (vec_stmt, gimple_location (stmt_info->stmt)); > > /* While EH edges will generally prevent vectorization, stmt might > e.g. be in a must-not-throw region. Ensure newly created stmts > that could throw are part of the same region. 
*/ > - int lp_nr = lookup_stmt_eh_lp (stmt); > + int lp_nr = lookup_stmt_eh_lp (stmt_info->stmt); > if (lp_nr != 0 && stmt_could_throw_p (vec_stmt)) > add_stmt_to_eh_lp (vec_stmt, lp_nr); > > @@ -2269,7 +2270,7 @@ get_group_load_store_type (gimple *stmt, > > if (!STMT_VINFO_STRIDED_P (stmt_info) > && (can_overrun_p || !would_overrun_p) > - && compare_step_with_zero (stmt) > 0) > + && compare_step_with_zero (stmt_info) > 0) > { > /* First cope with the degenerate case of a single-element > vector. */ > @@ -2309,7 +2310,7 @@ get_group_load_store_type (gimple *stmt, > if (*memory_access_type == VMAT_ELEMENTWISE > && single_element_p > && loop_vinfo > - && vect_use_strided_gather_scatters_p (stmt, loop_vinfo, > + && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo, > masked_p, gs_info)) > *memory_access_type = VMAT_GATHER_SCATTER; > } > @@ -2421,7 +2422,7 @@ get_load_store_type (gimple *stmt, tree > if (STMT_VINFO_GATHER_SCATTER_P (stmt_info)) > { > *memory_access_type = VMAT_GATHER_SCATTER; > - if (!vect_check_gather_scatter (stmt, loop_vinfo, gs_info)) > + if (!vect_check_gather_scatter (stmt_info, loop_vinfo, gs_info)) > gcc_unreachable (); > else if (!vect_is_simple_use (gs_info->offset, vinfo, > &gs_info->offset_dt, > @@ -2436,15 +2437,15 @@ get_load_store_type (gimple *stmt, tree > } > else if (STMT_VINFO_GROUPED_ACCESS (stmt_info)) > { > - if (!get_group_load_store_type (stmt, vectype, slp, masked_p, vls_type, > - memory_access_type, gs_info)) > + if (!get_group_load_store_type (stmt_info, vectype, slp, masked_p, > + vls_type, memory_access_type, gs_info)) > return false; > } > else if (STMT_VINFO_STRIDED_P (stmt_info)) > { > gcc_assert (!slp); > if (loop_vinfo > - && vect_use_strided_gather_scatters_p (stmt, loop_vinfo, > + && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo, > masked_p, gs_info)) > *memory_access_type = VMAT_GATHER_SCATTER; > else > @@ -2452,10 +2453,10 @@ get_load_store_type (gimple *stmt, tree > } > else > { > - int cmp = compare_step_with_zero (stmt); > + int cmp = compare_step_with_zero (stmt_info); > if (cmp < 0) > *memory_access_type = get_negative_load_store_type > - (stmt, vectype, vls_type, ncopies); > + (stmt_info, vectype, vls_type, ncopies); > else if (cmp == 0) > { > gcc_assert (vls_type == VLS_LOAD); > @@ -2742,8 +2743,8 @@ vect_build_gather_load_calls (gimple *st > else > gcc_unreachable (); > > - tree vec_dest = vect_create_destination_var (gimple_get_lhs (stmt), > - vectype); > + tree scalar_dest = gimple_get_lhs (stmt_info->stmt); > + tree vec_dest = vect_create_destination_var (scalar_dest, vectype); > > tree ptr = fold_convert (ptrtype, gs_info->base); > if (!is_gimple_min_invariant (ptr)) > @@ -2765,8 +2766,8 @@ vect_build_gather_load_calls (gimple *st > > if (!mask) > { > - src_op = vect_build_zero_merge_argument (stmt, rettype); > - mask_op = vect_build_all_ones_mask (stmt, masktype); > + src_op = vect_build_zero_merge_argument (stmt_info, rettype); > + mask_op = vect_build_all_ones_mask (stmt_info, masktype); > } > > for (int j = 0; j < ncopies; ++j) > @@ -2774,10 +2775,10 @@ vect_build_gather_load_calls (gimple *st > tree op, var; > if (modifier == WIDEN && (j & 1)) > op = permute_vec_elements (vec_oprnd0, vec_oprnd0, > - perm_mask, stmt, gsi); > + perm_mask, stmt_info, gsi); > else if (j == 0) > op = vec_oprnd0 > - = vect_get_vec_def_for_operand (gs_info->offset, stmt); > + = vect_get_vec_def_for_operand (gs_info->offset, stmt_info); > else > op = vec_oprnd0 > = vect_get_vec_def_for_stmt_copy (gs_info->offset_dt, 
vec_oprnd0); > @@ -2789,7 +2790,7 @@ vect_build_gather_load_calls (gimple *st > var = vect_get_new_ssa_name (idxtype, vect_simple_var); > op = build1 (VIEW_CONVERT_EXPR, idxtype, op); > gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op); > - vect_finish_stmt_generation (stmt, new_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > op = var; > } > > @@ -2797,11 +2798,11 @@ vect_build_gather_load_calls (gimple *st > { > if (mask_perm_mask && (j & 1)) > mask_op = permute_vec_elements (mask_op, mask_op, > - mask_perm_mask, stmt, gsi); > + mask_perm_mask, stmt_info, gsi); > else > { > if (j == 0) > - vec_mask = vect_get_vec_def_for_operand (mask, stmt); > + vec_mask = vect_get_vec_def_for_operand (mask, stmt_info); > else > vec_mask = vect_get_vec_def_for_stmt_copy (mask_dt, vec_mask); > > @@ -2815,7 +2816,7 @@ vect_build_gather_load_calls (gimple *st > mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op); > gassign *new_stmt > = gimple_build_assign (var, VIEW_CONVERT_EXPR, mask_op); > - vect_finish_stmt_generation (stmt, new_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > mask_op = var; > } > } > @@ -2832,17 +2833,19 @@ vect_build_gather_load_calls (gimple *st > TYPE_VECTOR_SUBPARTS (rettype))); > op = vect_get_new_ssa_name (rettype, vect_simple_var); > gimple_call_set_lhs (new_call, op); > - vect_finish_stmt_generation (stmt, new_call, gsi); > + vect_finish_stmt_generation (stmt_info, new_call, gsi); > var = make_ssa_name (vec_dest); > op = build1 (VIEW_CONVERT_EXPR, vectype, op); > gassign *new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op); > - new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi); > + new_stmt_info > + = vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > } > else > { > var = make_ssa_name (vec_dest, new_call); > gimple_call_set_lhs (new_call, var); > - new_stmt_info = vect_finish_stmt_generation (stmt, new_call, gsi); > + new_stmt_info > + = vect_finish_stmt_generation (stmt_info, new_call, gsi); > } > > if (modifier == NARROW) > @@ -2852,7 +2855,8 @@ vect_build_gather_load_calls (gimple *st > prev_res = var; > continue; > } > - var = permute_vec_elements (prev_res, var, perm_mask, stmt, gsi); > + var = permute_vec_elements (prev_res, var, perm_mask, > + stmt_info, gsi); > new_stmt_info = loop_vinfo->lookup_def (var); > } > > @@ -3027,7 +3031,7 @@ vectorizable_bswap (gimple *stmt, gimple > { > /* Handle uses. 
*/ > if (j == 0) > - vect_get_vec_defs (op, NULL, stmt, &vec_oprnds, NULL, slp_node); > + vect_get_vec_defs (op, NULL, stmt_info, &vec_oprnds, NULL, slp_node); > else > vect_get_vec_defs_for_stmt_copy (dt, &vec_oprnds, NULL); > > @@ -3040,15 +3044,16 @@ vectorizable_bswap (gimple *stmt, gimple > tree tem = make_ssa_name (char_vectype); > new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR, > char_vectype, vop)); > - vect_finish_stmt_generation (stmt, new_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > tree tem2 = make_ssa_name (char_vectype); > new_stmt = gimple_build_assign (tem2, VEC_PERM_EXPR, > tem, tem, bswap_vconst); > - vect_finish_stmt_generation (stmt, new_stmt, gsi); > + vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > tem = make_ssa_name (vectype); > new_stmt = gimple_build_assign (tem, build1 (VIEW_CONVERT_EXPR, > vectype, tem2)); > - new_stmt_info = vect_finish_stmt_generation (stmt, new_stmt, gsi); > + new_stmt_info > + = vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > if (slp_node) > SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info); > } > @@ -3137,8 +3142,8 @@ vectorizable_call (gimple *gs, gimple_st > && ! vec_stmt) > return false; > > - /* Is GS a vectorizable call? */ > - stmt = dyn_cast (gs); > + /* Is STMT_INFO a vectorizable call? */ > + stmt = dyn_cast (stmt_info->stmt); > if (!stmt) > return false; > > @@ -3307,7 +3312,7 @@ vectorizable_call (gimple *gs, gimple_st > && (gimple_call_builtin_p (stmt, BUILT_IN_BSWAP16) > || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP32) > || gimple_call_builtin_p (stmt, BUILT_IN_BSWAP64))) > - return vectorizable_bswap (stmt, gsi, vec_stmt, slp_node, > + return vectorizable_bswap (stmt_info, gsi, vec_stmt, slp_node, > vectype_in, dt, cost_vec); > else > { > @@ -3400,7 +3405,7 @@ vectorizable_call (gimple *gs, gimple_st > gimple_call_set_lhs (call, half_res); > gimple_call_set_nothrow (call, true); > new_stmt_info > - = vect_finish_stmt_generation (stmt, call, gsi); > + = vect_finish_stmt_generation (stmt_info, call, gsi); > if ((i & 1) == 0) > { > prev_res = half_res; > @@ -3411,7 +3416,8 @@ vectorizable_call (gimple *gs, gimple_st > = gimple_build_assign (new_temp, convert_code, > prev_res, half_res); > new_stmt_info > - = vect_finish_stmt_generation (stmt, new_stmt, gsi); > + = vect_finish_stmt_generation (stmt_info, new_stmt, > + gsi); > } > else > { > @@ -3435,7 +3441,7 @@ vectorizable_call (gimple *gs, gimple_st > gimple_call_set_lhs (call, new_temp); > gimple_call_set_nothrow (call, true); > new_stmt_info > - = vect_finish_stmt_generation (stmt, call, gsi); > + = vect_finish_stmt_generation (stmt_info, call, gsi); > } > SLP_TREE_VEC_STMTS (slp_node).quick_push (new_stmt_info); > } > @@ -3453,7 +3459,7 @@ vectorizable_call (gimple *gs, gimple_st > op = gimple_call_arg (stmt, i); > if (j == 0) > vec_oprnd0 > - = vect_get_vec_def_for_operand (op, stmt); > + = vect_get_vec_def_for_operand (op, stmt_info); > else > vec_oprnd0 > = vect_get_vec_def_for_stmt_copy (dt[i], orig_vargs[i]); > @@ -3476,11 +3482,11 @@ vectorizable_call (gimple *gs, gimple_st > tree new_var > = vect_get_new_ssa_name (vectype_out, vect_simple_var, "cst_"); > gimple *init_stmt = gimple_build_assign (new_var, cst); > - vect_init_vector_1 (stmt, init_stmt, NULL); > + vect_init_vector_1 (stmt_info, init_stmt, NULL); > new_temp = make_ssa_name (vec_dest); > gimple *new_stmt = gimple_build_assign (new_temp, new_var); > new_stmt_info > - = vect_finish_stmt_generation (stmt, new_stmt, gsi); > + = 
vect_finish_stmt_generation (stmt_info, new_stmt, gsi); > } > else if (modifier == NARROW) > { > @@ -3491,7 +3497,8 @@ vectorizable_call (gimple *gs, gimple_st > gcall *call = gimple_build_call_internal_vec (ifn, vargs); > gimple_call_set_lhs (call, half_res); > gimple_call_set_nothrow (call, true); > - new_stmt_info = vect_finish_stmt_generation (stmt, call, gsi); > + new_stmt_info > + = vect_finish_stmt_generation (stmt_info, call, gsi); > if ((j & 1) == 0) > { > prev_res = half_res; > @@ -3501,7 +3508,7 @@ vectorizable_call (gimple *gs, gimple_st > gassign *new_stmt = gimple_build_assign (new_temp, convert_code, > prev_res, half_res); >
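
For readers who don't live in the vectorizer, the shape of the change is easier to see in isolation. What follows is a minimal, self-contained sketch using simplified stand-in types rather than GCC's real declarations (the actual stmt_vec_info and its accessors live in tree-vectorizer.h and carry far more state; process_stmt below is a hypothetical helper): a stmt_vec_info wraps the underlying gimple statement together with the vectorizer's per-statement metadata, so routines can pass the wrapper around and dereference ->stmt only at the points where the raw statement itself is needed.

#include <cstdio>

struct gimple
{
  const char *repr;     /* stand-in: printable form of the statement */
};

struct stmt_vec_info_
{
  gimple *stmt;         /* the scalar statement this info describes */
  bool live_p;          /* example of attached vectorizer metadata */
};
typedef stmt_vec_info_ *stmt_vec_info;

/* New style, as in this patch: helpers take the stmt_vec_info and
   unwrap ->stmt only where the underlying statement itself is used,
   instead of taking a gimple * and re-deriving the metadata.  */
static void
process_stmt (stmt_vec_info stmt_info)
{
  std::printf ("stmt: %s  live_p: %d\n",
               stmt_info->stmt->repr, (int) stmt_info->live_p);
}

int
main ()
{
  gimple g = { "a_1 = b_2 + c_3;" };
  stmt_vec_info_ info = { &g, true };
  process_stmt (&info);  /* callers hand over the wrapper, not a gimple * */
  return 0;
}

Under that scheme, the mechanical substitution described in the cover note (replace "stmt" with "stmt_info", and write stmt_info->stmt explicitly wherever a gimple statement is consumed) is exactly the pattern visible in the hunks above.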