public inbox for gcc-patches@gcc.gnu.org
* [PATCH] Support threading of just the exit edge
@ 2022-08-12 12:01 Richard Biener
From: Richard Biener @ 2022-08-12 12:01 UTC
  To: gcc-patches

This started with noticing that we add ENTRY_BLOCK to our threads
just for the sake of simplifying the conditional at the end of
the first block in a function.  That's not really threading
anything, but it ends up duplicating the entry block and
rewriting the result instead of statically folding the jump.
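
For illustration, a reduced example of mine (not from the patch or
the testsuite; it assumes ranger sees the range of y) of the kind
of conditional this is about:

  int f (unsigned int x)
  {
    unsigned int y = x % 4;
    /* Statically false: y is always in [0, 3], so no threading
       path is needed to resolve this conditional.  */
    if (y > 5)
      return 0;
    return 1;
  }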

The following tries to handle those by recording simplifications
of the exit conditional as a thread of length one.  That requires
special-casing them in the backward copier, since if we do not
have any block to copy but instead modify the jump in place and
remove the not-taken edges, this confuses the hell out of the
remaining threads.

So back_jt_path_registry::update_cfg now first marks all
edges we know are never taken and then prunes the threading
candidates when they include such an edge.  Then it makes sure
to perform unreachable edge removal first (so we avoid
copying those edges when other thread paths contain the
prevailing edge) before continuing to apply the remaining
threads.

In the statistics you can see this avoids quite a number of useless
threads (I've investigated 3 random files from cc1files with
dropped stats in any of the thread passes).

Still, thinking about it, it would be nice to avoid the work of
discovering those candidates we have to throw away later.  That
could eventually be done by having the backward threader perform
an RPO walk over the CFG, skipping edges that can be statically
determined as never executed.  Below I'm abusing the path range
query to statically analyze the exit branch, but I assume there's
a simpler way of folding this stmt, which could then integrate
better with such a walk.
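
As a rough sketch of what I mean (untested, and assuming the
gimple_ranger / find_taken_edge interfaces are adequate here),
something like:

  /* Rough sketch, not part of the patch: determine the statically
     taken exit edge of BB without path-sensitive information.  */
  static edge
  statically_taken_edge (basic_block bb)
  {
    gimple *last = last_stmt (bb);
    if (!last || gimple_code (last) != GIMPLE_COND)
      return NULL;
    gimple_ranger ranger;
    int_range_max r;
    tree val;
    /* Fold the condition using global ranges only; if the result
       is a singleton, map the constant to the outgoing edge.  */
    if (ranger.range_of_stmt (r, last) && r.singleton_p (&val))
      return find_taken_edge (bb, val);
    return NULL;
  }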

In any case it seems worth more consciously handling the
case of exit branches that simplify without path-sensitive
information.

Then the patch also restricts path discovery when we'd produce
threads we'll reject later during copying - the backward threader
copying cannot handle paths where the blocks to duplicate are not
all from exactly the same loop.  I'm probably going to split this
part out.

Any thoughts?

	* gimple-range-path.cc (path_range_query::set_path): Adjust
	assert to allow paths of size one.
	* tree-ssa-threadbackward.cc (back_threader::maybe_register_path):
	Paths of size one are always profitable.
	(back_threader::find_paths_to_names): Likewise.
	Do not walk further if we are leaving the current loop.
	(back_threader::find_taken_edge): Remove assert.  Do not
	walk to ENTRY_BLOCK.
	* tree-ssa-threadupdate.cc (back_jt_path_registry::update_cfg):
	Handle jump threads of just the exit edge by modifying the
	control statement in-place.
---
 gcc/gimple-range-path.cc       |  2 +-
 gcc/tree-ssa-threadbackward.cc | 21 ++++++++-----
 gcc/tree-ssa-threadupdate.cc   | 54 ++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+), 8 deletions(-)

diff --git a/gcc/gimple-range-path.cc b/gcc/gimple-range-path.cc
index 78146f5683e..a7d277c31b8 100644
--- a/gcc/gimple-range-path.cc
+++ b/gcc/gimple-range-path.cc
@@ -220,7 +220,7 @@ path_range_query::unreachable_path_p ()
 void
 path_range_query::set_path (const vec<basic_block> &path)
 {
-  gcc_checking_assert (path.length () > 1);
+  gcc_checking_assert (!path.is_empty ());
   m_path = path.copy ();
   m_pos = m_path.length () - 1;
   bitmap_clear (m_has_cache_entry);
diff --git a/gcc/tree-ssa-threadbackward.cc b/gcc/tree-ssa-threadbackward.cc
index b886027fccf..669098e4ec3 100644
--- a/gcc/tree-ssa-threadbackward.cc
+++ b/gcc/tree-ssa-threadbackward.cc
@@ -241,8 +241,9 @@ back_threader::maybe_register_path ()
       else
 	{
 	  bool irreducible = false;
-	  if (m_profit.profitable_path_p (m_path, m_name, taken_edge,
-					  &irreducible)
+	  if ((m_path.length () == 1
+	       || m_profit.profitable_path_p (m_path, m_name, taken_edge,
+					      &irreducible))
 	      && debug_counter ()
 	      && m_registry.register_path (m_path, taken_edge))
 	    {
@@ -267,7 +268,6 @@ back_threader::maybe_register_path ()
 edge
 back_threader::find_taken_edge (const vec<basic_block> &path)
 {
-  gcc_checking_assert (path.length () > 1);
   switch (gimple_code (m_last_stmt))
     {
     case GIMPLE_COND:
@@ -350,9 +350,15 @@ back_threader::find_paths_to_names (basic_block bb, bitmap interesting,
   m_path.safe_push (bb);
 
   // Try to resolve the path without looking back.
-  if (m_path.length () > 1
-      && (!m_profit.profitable_path_p (m_path, m_name, NULL)
-	  || maybe_register_path ()))
+  if ((m_path.length () > 1
+       && !m_profit.profitable_path_p (m_path, m_name, NULL))
+      || maybe_register_path ())
+    ;
+
+  // The backwards thread copier cannot copy blocks that do not belong
+  // to the same loop, so when the new source of the path entry no
+  // longer belongs to it we don't need to search further.
+  else if (m_path[0]->loop_father != bb->loop_father)
     ;
 
   // Continue looking for ways to extend the path but limit the
@@ -445,7 +451,8 @@ back_threader::find_paths_to_names (basic_block bb, bitmap interesting,
 	  edge e;
 	  FOR_EACH_EDGE (e, iter, bb->preds)
 	    {
-	      if (e->flags & EDGE_ABNORMAL
+	      if ((e->flags & EDGE_ABNORMAL)
+		  || e->src->index == ENTRY_BLOCK
 		  // This is like path_crosses_loops in profitable_path_p but
 		  // more restrictive to avoid peeling off loop iterations (see
 		  // tree-ssa/pr14341.c for an example).
diff --git a/gcc/tree-ssa-threadupdate.cc b/gcc/tree-ssa-threadupdate.cc
index 59c268a3567..d40fa7c4cff 100644
--- a/gcc/tree-ssa-threadupdate.cc
+++ b/gcc/tree-ssa-threadupdate.cc
@@ -2613,6 +2613,60 @@ back_jt_path_registry::update_cfg (bool /*peel_loop_headers*/)
   bool retval = false;
   hash_set<edge> visited_starting_edges;
 
+  /* Mark never taken edges from paths that are just jump simplifications.  */
+  auto_edge_flag never_taken (cfun);
+  for (auto path : m_paths)
+    if (path->length () == 1)
+      {
+	edge_iterator ei;
+	edge e;
+	FOR_EACH_EDGE (e, ei, (*path)[0]->e->src->succs)
+	  if (e != (*path)[0]->e)
+	    e->flags |= never_taken;
+      }
+
+  /* And prune paths that contain such an edge before we remove them.  */
+  for (unsigned i = 0; i < m_paths.length ();)
+    {
+      bool remove = false;
+      for (auto je : *m_paths[i])
+	{
+	  if (je->e->flags & never_taken)
+	    {
+	      cancel_thread (m_paths[i],
+			     "Avoiding threading through unreachable edge");
+	      remove = true;
+	      break;
+	    }
+	}
+      if (!remove)
+	++i;
+      else
+	m_paths.unordered_remove (i);
+    }
+
+  /* Finally perform those threads first, this way we avoid copying the
+     dead outgoing edges when other threads contain the prevailing edge.  */
+  for (unsigned i = 0; i < m_paths.length ();)
+    {
+      vec<jump_thread_edge *> *path = m_paths[i];
+      if (path->length () != 1)
+	{
+	  ++i;
+	  continue;
+	}
+      edge exit = (*path)[0]->e;
+      remove_ctrl_stmt_and_useless_edges (exit->src, exit->dest);
+      exit->flags &= ~(EDGE_TRUE_VALUE | EDGE_FALSE_VALUE | EDGE_ABNORMAL);
+      exit->flags |= EDGE_FALLTHRU;
+      /* We do not update dominance info.  */
+      free_dominance_info (CDI_DOMINATORS);
+      retval = true;
+      m_num_threaded_edges++;
+      path->release ();
+      m_paths.unordered_remove (i);
+    }
+
   while (m_paths.length ())
     {
       vec<jump_thread_edge *> *path = m_paths[0];
-- 
2.35.3
