From: Richard Biener <rguenther@suse.de>
To: Tamar Christina <Tamar.Christina@arm.com>
Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
nd <nd@arm.com>, "jlaw@ventanamicro.com" <jlaw@ventanamicro.com>
Subject: RE: [PATCH 4/21]middle-end: update loop peeling code to maintain LCSSA form for early breaks
Date: Wed, 15 Nov 2023 12:40:45 +0000 (UTC)
Message-ID: <nycvar.YFH.7.77.849.2311151231250.8772@jbgna.fhfr.qr>
In-Reply-To: <VI1PR08MB53254CC2C5490804977E0852FFB1A@VI1PR08MB5325.eurprd08.prod.outlook.com>
On Wed, 15 Nov 2023, Tamar Christina wrote:
> Patch updated to latest trunk,
>
> This splits the part of the function that does peeling for loops at exits out
> into a separate function. In this new function we also peel for early breaks.
>
> Peeling for early breaks works by redirecting all early break exits to a
> single "early break" block, combining them with the normal exit edge later in
> a different block which then goes into the epilog preheader.
>
> This allows us to re-use all the existing code for IV updates. Additionally,
> this also enables correct linking of multiple vector epilogues.
>
> flush_pending_stmts cannot be used in this scenario since it updates the PHI
> nodes in the order that they appear in the exit destination blocks, i.e. in
> CFG visit order. With a single exit this doesn't matter, but with multiple
> exits carrying different live values through the different exits the orders
> usually do not line up.
>
> Additionally the vectorizer helper functions expect to be able to iterate over
> the PHI nodes in the order that they occur in the loop header blocks. This is
> an invariant we must maintain. To do this we inline the work of
> flush_pending_stmts but maintain the order by using the header blocks to guide
> the work.
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> * tree-vect-loop-manip.cc (vect_is_loop_exit_latch_pred): New.
> (slpeel_tree_duplicate_loop_for_vectorization): New.
> (slpeel_tree_duplicate_loop_to_edge_cfg): Use it.
> * tree-vectorizer.h (is_loop_header_bb_p): Drop assert.
> (slpeel_tree_duplicate_loop_to_edge_cfg): Update signature.
> (vect_is_loop_exit_latch_pred): New.
>
> --- inline copy of patch ---
>
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index b9161274ce401a7307f3e61ad23aa036701190d7..fafbf924e8db18eb4eec7a4a1906d10f6ce9812f 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1392,6 +1392,153 @@ vect_set_loop_condition (class loop *loop, edge loop_e, loop_vec_info loop_vinfo
> (gimple *) cond_stmt);
> }
>
> +/* Determine if the exit chosen by the loop vectorizer differs from the
> + natural loop exit, i.e. if the exit leads to the loop latch or not.
> + When this happens we need to flip the understanding of main and other
> + exits by peeling and IV updates. */
> +
> +inline bool
> +vect_is_loop_exit_latch_pred (edge loop_exit, class loop *loop)
Ick, bad name - didn't see its use(s) in this patch?
> +{
> + return single_pred (loop->latch) == loop_exit->src;
> +}
> +
> +/* Perform peeling for when the peeled loop is placed after the original loop.
> + This maintains LCSSA and creates the appropriate blocks for multiple exit
> + vectorization. */
> +
> +static void
> +slpeel_tree_duplicate_loop_for_vectorization (class loop *loop, edge loop_exit,
> + vec<edge> &loop_exits,
> + class loop *new_loop,
> + bool flow_loops,
> + basic_block new_preheader)
also bad name ;) I don't see a strong reason to factor this out.
> +{
> + bool multiple_exits_p = loop_exits.length () > 1;
> + basic_block main_loop_exit_block = new_preheader;
> + if (multiple_exits_p)
> + {
> + edge loop_entry = single_succ_edge (new_preheader);
> + new_preheader = split_edge (loop_entry);
> + }
> +
> + auto_vec <gimple *> new_phis;
> + hash_map <tree, tree> new_phi_args;
> + /* First create the empty phi nodes so that when we flush the
> + statements they can be filled in. However because there is no order
> + between the PHI nodes in the exits and the loop headers we need to
> + order them based on the order of the two headers. First record the new
> + phi nodes. Then redirect the edges and flush the changes. This writes out
> + the new SSA names. */
> + for (auto gsi_from = gsi_start_phis (loop_exit->dest);
> + !gsi_end_p (gsi_from); gsi_next (&gsi_from))
> + {
> + gimple *from_phi = gsi_stmt (gsi_from);
> + tree new_res = copy_ssa_name (gimple_phi_result (from_phi));
> + gphi *res = create_phi_node (new_res, main_loop_exit_block);
> + new_phis.safe_push (res);
> + }
> +
> + for (auto exit : loop_exits)
> + {
> + basic_block dest
> + = exit == loop_exit ? main_loop_exit_block : new_preheader;
> + redirect_edge_and_branch (exit, dest);
> + }
> +
> + /* Only flush the main exit; for the remaining exits we need to match the
> + order in the loop->header, which with multiple exits may not be the same. */
> + flush_pending_stmts (loop_exit);
> +
> + /* Record the new SSA names in the cache so that we can skip materializing
> + them again when we fill in the rest of the LCSSA variables. */
> + for (auto phi : new_phis)
> + {
> + tree new_arg = gimple_phi_arg (phi, 0)->def;
> +
> + if (!SSA_VAR_P (new_arg))
> + continue;
> +
> + /* If the PHI MEM node dominates the loop then we shouldn't create
> + a new LC-SSA PHI for it in the intermediate block. */
> + /* A MEM phi that constitutes a new DEF for the vUSE chain can either
> + be a .VDEF or a PHI that operates on MEM. And said definition
> + must not be inside the main loop. Or we must be a parameter.
> + In the last two cases we may remove a non-MEM PHI node, but since
> + they dominate both loops the removal is unlikely to cause trouble
> + as the exits must already be using them. */
> + if (virtual_operand_p (new_arg)
> + && (SSA_NAME_IS_DEFAULT_DEF (new_arg)
> + || !flow_bb_inside_loop_p (loop,
> + gimple_bb (SSA_NAME_DEF_STMT (new_arg)))))
> + {
> + auto gsi = gsi_for_stmt (phi);
> + remove_phi_node (&gsi, true);
> + continue;
> + }
> +
> + /* If we decide to remove the PHI node we should also not
> + rematerialize it later on. */
> + new_phi_args.put (new_arg, gimple_phi_result (phi));
> +
> + if (TREE_CODE (new_arg) != SSA_NAME)
> + continue;
> + }
> +
> + /* Copy the current loop LC PHI nodes between the original loop exit
> + block and the new loop header. This allows us to later split the
> + preheader block and still find the right LC nodes. */
> + edge loop_entry = single_succ_edge (new_preheader);
> + if (flow_loops)
> + for (auto gsi_from = gsi_start_phis (loop->header),
> + gsi_to = gsi_start_phis (new_loop->header);
> + !gsi_end_p (gsi_from) && !gsi_end_p (gsi_to);
> + gsi_next (&gsi_from), gsi_next (&gsi_to))
> + {
> + gimple *from_phi = gsi_stmt (gsi_from);
> + gimple *to_phi = gsi_stmt (gsi_to);
> + tree new_arg = PHI_ARG_DEF_FROM_EDGE (from_phi, loop_latch_edge (loop));
> + tree *res = NULL;
> +
> + /* Check if we've already created a new phi node during edge
> + redirection. If we have, only propagate the value downwards. */
> + if ((res = new_phi_args.get (new_arg)))
> + new_arg = *res;
> +
> + /* All other exits use the previous iters. */
> + if (multiple_exits_p)
> + {
> + tree alt_arg = gimple_phi_result (from_phi);
> + tree alt_res = copy_ssa_name (alt_arg);
> + gphi *alt_lcssa_phi = create_phi_node (alt_res, new_preheader);
> + edge main_e = single_succ_edge (main_loop_exit_block);
> + for (edge e : loop_exits)
> + if (e != loop_exit)
> + {
> + add_phi_arg (alt_lcssa_phi, alt_arg, e, UNKNOWN_LOCATION);
> + SET_PHI_ARG_DEF (alt_lcssa_phi, main_e->dest_idx, new_arg);
> + }
> + new_arg = alt_res; /* Push it down to the new_loop header. */
I think it would be clearer to separate alternate exit from main exit
handling more completely - we don't have the new_phi_args map for
the alternate exits.
Thus first only redirect and fixup the main exit and then redirect
the alternate exits, immediately wiping the edge_var_map, and
manually create the only relevant PHIs.
In principle this early-break handling could be fully within the
if (flow_loops) condition (including populating the new_phi_args
map for the main exit).
The code itself looks fine to me.
Richard.
> + }
> + else if (!res)
> + {
> + /* For non-early break we need to keep the possibly live values in
> + the exit block. For early break these are kept in the merge
> + block in the code above. */
> + tree new_res = copy_ssa_name (gimple_phi_result (from_phi));
> + gphi *lcssa_phi = create_phi_node (new_res, new_preheader);
> +
> + /* Main loop exit should use the final iter value. */
> + add_phi_arg (lcssa_phi, new_arg, loop_exit, UNKNOWN_LOCATION);
> + new_arg = new_res;
> + }
> +
> + adjust_phi_and_debug_stmts (to_phi, loop_entry, new_arg);
> + }
> +
> + /* Now clear all the redirect maps. */
> + for (auto exit : loop_exits)
> + redirect_edge_var_map_clear (exit);
> +}
> +
> /* Given LOOP this function generates a new copy of it and puts it
> on E which is either the entry or exit of LOOP. If SCALAR_LOOP is
> non-NULL, assume LOOP and SCALAR_LOOP are equivalent and copy the
> @@ -1403,13 +1550,16 @@ vect_set_loop_condition (class loop *loop, edge loop_e, loop_vec_info loop_vinfo
> copies remains the same.
>
> If UPDATED_DOMS is not NULL it is update with the list of basic blocks whoms
> - dominators were updated during the peeling. */
> + dominators were updated during the peeling. When doing early break
> + vectorization, LOOP_VINFO needs to be provided and is used to keep track
> + of any newly created memory references that need to be updated should we
> + decide to vectorize. */
>
> class loop *
> slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit,
> class loop *scalar_loop,
> edge scalar_exit, edge e, edge *new_e,
> - bool flow_loops)
> + bool flow_loops,
> + vec<basic_block> *updated_doms)
> {
> class loop *new_loop;
> basic_block *new_bbs, *bbs, *pbbs;
> @@ -1526,7 +1676,9 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit,
> }
>
> auto loop_exits = get_loop_exit_edges (loop);
> + bool multiple_exits_p = loop_exits.length () > 1;
> auto_vec<basic_block> doms;
> + class loop *update_loop = NULL;
>
> if (at_exit) /* Add the loop copy at exit. */
> {
> @@ -1536,91 +1688,9 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit,
> flush_pending_stmts (new_exit);
> }
>
> - auto_vec <gimple *> new_phis;
> - hash_map <tree, tree> new_phi_args;
> - /* First create the empty phi nodes so that when we flush the
> - statements they can be filled in. However because there is no order
> - between the PHI nodes in the exits and the loop headers we need to
> - order them base on the order of the two headers. First record the new
> - phi nodes. */
> - for (auto gsi_from = gsi_start_phis (scalar_exit->dest);
> - !gsi_end_p (gsi_from); gsi_next (&gsi_from))
> - {
> - gimple *from_phi = gsi_stmt (gsi_from);
> - tree new_res = copy_ssa_name (gimple_phi_result (from_phi));
> - gphi *res = create_phi_node (new_res, new_preheader);
> - new_phis.safe_push (res);
> - }
> -
> - /* Then redirect the edges and flush the changes. This writes out the new
> - SSA names. */
> - for (edge exit : loop_exits)
> - {
> - edge temp_e = redirect_edge_and_branch (exit, new_preheader);
> - flush_pending_stmts (temp_e);
> - }
> - /* Record the new SSA names in the cache so that we can skip materializing
> - them again when we fill in the rest of the LCSSA variables. */
> - for (auto phi : new_phis)
> - {
> - tree new_arg = gimple_phi_arg (phi, 0)->def;
> -
> - if (!SSA_VAR_P (new_arg))
> - continue;
> - /* If the PHI MEM node dominates the loop then we shouldn't create
> - a new LC-SSSA PHI for it in the intermediate block. */
> - /* A MEM phi that consitutes a new DEF for the vUSE chain can either
> - be a .VDEF or a PHI that operates on MEM. And said definition
> - must not be inside the main loop. Or we must be a parameter.
> - In the last two cases we may remove a non-MEM PHI node, but since
> - they dominate both loops the removal is unlikely to cause trouble
> - as the exits must already be using them. */
> - if (virtual_operand_p (new_arg)
> - && (SSA_NAME_IS_DEFAULT_DEF (new_arg)
> - || !flow_bb_inside_loop_p (loop,
> - gimple_bb (SSA_NAME_DEF_STMT (new_arg)))))
> - {
> - auto gsi = gsi_for_stmt (phi);
> - remove_phi_node (&gsi, true);
> - continue;
> - }
> - new_phi_args.put (new_arg, gimple_phi_result (phi));
> -
> - if (TREE_CODE (new_arg) != SSA_NAME)
> - continue;
> - }
> -
> - /* Copy the current loop LC PHI nodes between the original loop exit
> - block and the new loop header. This allows us to later split the
> - preheader block and still find the right LC nodes. */
> - edge loop_entry = single_succ_edge (new_preheader);
> - if (flow_loops)
> - for (auto gsi_from = gsi_start_phis (loop->header),
> - gsi_to = gsi_start_phis (new_loop->header);
> - !gsi_end_p (gsi_from) && !gsi_end_p (gsi_to);
> - gsi_next (&gsi_from), gsi_next (&gsi_to))
> - {
> - gimple *from_phi = gsi_stmt (gsi_from);
> - gimple *to_phi = gsi_stmt (gsi_to);
> - tree new_arg = PHI_ARG_DEF_FROM_EDGE (from_phi,
> - loop_latch_edge (loop));
> -
> - /* Check if we've already created a new phi node during edge
> - redirection. If we have, only propagate the value downwards. */
> - if (tree *res = new_phi_args.get (new_arg))
> - {
> - adjust_phi_and_debug_stmts (to_phi, loop_entry, *res);
> - continue;
> - }
> -
> - tree new_res = copy_ssa_name (gimple_phi_result (from_phi));
> - gphi *lcssa_phi = create_phi_node (new_res, new_preheader);
> -
> - /* Main loop exit should use the final iter value. */
> - add_phi_arg (lcssa_phi, new_arg, loop_exit, UNKNOWN_LOCATION);
> -
> - adjust_phi_and_debug_stmts (to_phi, loop_entry, new_res);
> - }
> + slpeel_tree_duplicate_loop_for_vectorization (loop, loop_exit, loop_exits,
> + new_loop, flow_loops,
> + new_preheader);
>
> set_immediate_dominator (CDI_DOMINATORS, new_preheader, e->src);
>
> @@ -1634,6 +1704,21 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit,
> delete_basic_block (preheader);
> set_immediate_dominator (CDI_DOMINATORS, scalar_loop->header,
> loop_preheader_edge (scalar_loop)->src);
> +
> + /* Finally after wiring the new epilogue we need to update its main exit
> + to the original function exit we recorded. Other exits are already
> + correct. */
> + if (multiple_exits_p)
> + {
> + update_loop = new_loop;
> + for (edge e : get_loop_exit_edges (loop))
> + doms.safe_push (e->dest);
> + doms.safe_push (exit_dest);
> +
> + /* Likely a fall-through edge, so update if needed. */
> + if (single_succ_p (exit_dest))
> + doms.safe_push (single_succ (exit_dest));
> + }
> }
> else /* Add the copy at entry. */
> {
> @@ -1681,6 +1766,34 @@ slpeel_tree_duplicate_loop_to_edge_cfg (class loop *loop, edge loop_exit,
> delete_basic_block (new_preheader);
> set_immediate_dominator (CDI_DOMINATORS, new_loop->header,
> loop_preheader_edge (new_loop)->src);
> +
> + if (multiple_exits_p)
> + update_loop = loop;
> + }
> +
> + if (multiple_exits_p)
> + {
> + for (edge e : get_loop_exit_edges (update_loop))
> + {
> + edge ex;
> + edge_iterator ei;
> + FOR_EACH_EDGE (ex, ei, e->dest->succs)
> + {
> + /* Find the first non-fallthrough block as fall-throughs can't
> + dominate other blocks. */
> + if (single_succ_p (ex->dest))
> + {
> + doms.safe_push (ex->dest);
> + ex = single_succ_edge (ex->dest);
> + }
> + doms.safe_push (ex->dest);
> + }
> + doms.safe_push (e->dest);
> + }
> +
> + iterate_fix_dominators (CDI_DOMINATORS, doms, false);
> + if (updated_doms)
> + updated_doms->safe_splice (doms);
> }
>
> free (new_bbs);
> diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
> index 76451a7fefe6ff966cecfa2cbc7b11336b038565..b9a71a0b5f5407417e8366b0df132df20c7f60aa 100644
> --- a/gcc/tree-vectorizer.h
> +++ b/gcc/tree-vectorizer.h
> @@ -1821,7 +1821,7 @@ is_loop_header_bb_p (basic_block bb)
> {
> if (bb == (bb->loop_father)->header)
> return true;
> - gcc_checking_assert (EDGE_COUNT (bb->preds) == 1);
> +
> return false;
> }
>
> @@ -2212,7 +2212,8 @@ extern bool slpeel_can_duplicate_loop_p (const class loop *, const_edge,
> const_edge);
> class loop *slpeel_tree_duplicate_loop_to_edge_cfg (class loop *, edge,
> class loop *, edge,
> - edge, edge *, bool = true);
> + edge, edge *, bool = true,
> + vec<basic_block> * = NULL);
> class loop *vect_loop_versioning (loop_vec_info, gimple *);
> extern class loop *vect_do_peeling (loop_vec_info, tree, tree,
> tree *, tree *, tree *, int, bool, bool,
> @@ -2223,6 +2224,7 @@ extern dump_user_location_t find_loop_location (class loop *);
> extern bool vect_can_advance_ivs_p (loop_vec_info);
> extern void vect_update_inits_of_drs (loop_vec_info, tree, tree_code);
> extern edge vec_init_loop_exit_info (class loop *);
> +extern bool vect_is_loop_exit_latch_pred (edge, class loop *);
>
> /* In tree-vect-stmts.cc. */
> extern tree get_related_vectype_for_scalar_type (machine_mode, tree,
>
--
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)