public inbox for gcc-patches@gcc.gnu.org
* Re: [PATCH] tree-optimization/110991 - unroll size estimate after vectorization
@ 2023-08-14 13:44 Richard Biener
  0 siblings, 0 replies; 3+ messages in thread
From: Richard Biener @ 2023-08-14 13:44 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jan Hubicka

On Mon, 14 Aug 2023, Richard Biener wrote:

> The following testcase shows that we are bad at identifying inductions
> that will be optimized away after vectorizing them because SCEV doesn't
> handle vectorized defs.  The following rolls a simpler identification
> of SSA cycles covering a PHI and an assignment with a binary operator
> with a constant second operand.
> 
> Bootstrapped and tested on x86_64-unknown-linux-gnu.
> 
> Note, I also have a more general approach (will reply to this mail
> with an RFC).

So the following is an RFC.  It replaces constant_after_peeling
with verifying that all SSA operands of a stmt are constants and
then folding the stmt, recording constant outcomes so that further
stmts can become constant in turn.

We now traverse the loop body twice: first with the optimistic
constant initial values of the IVs; after that traversal we drop
those values whose backedge value turns out to be non-constant.

We then use the outcomes from the second traversal for the size
estimate.
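
As a toy illustration (mine, not part of the patch): in foo below both
the counter and b receive constant backedge values, so both traversals
agree the whole body folds away after complete unrolling; in bar the
counter still folds, but b's backedge value depends on n, so the
optimistic constant seed for b from the first traversal is dropped and
its stmts are counted in the size estimate.

  /* Toy illustration only, not part of the patch.  */
  unsigned char foo (void)
  {
    unsigned char a, b = 0;
    for (a = 25; a > 13; --a)   /* counter with constant step */
      b += a << 3;              /* folds: a and b are known each iteration */
    return b;
  }

  unsigned bar (unsigned n)
  {
    unsigned b = 0;
    for (unsigned i = 0; i < 12; ++i)  /* counter still folds */
      b += n;                          /* does not fold: n is unknown */
    return b;
  }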

Now, we could use the sizes of the first traversal somehow
if we recorded them separately.  Maybe as a followup.

I've again chickened out of doing the transform-with-value-numbering
approach, stopping when we hit a stmt copy limit.  The reason is
of course that it's only reasonably simple if there's no branching
in the copied body (for example if we can resolve all branches during
unrolling).  Maybe we should really try harder here ...

I'm currently re-testing this (I made it less optimistic) and am
having to fix up some Fortran frontend -Warray-bounds diagnostics
(meh) as we now unroll something there.

Does this look better than trying to ad-hoc match the PHI "IV"s
that SCEV doesn't handle?

Thanks,
Richard.

From 75bc2d108ebc23d513fa49664ffc6bcdb5559495 Mon Sep 17 00:00:00 2001
From: Richard Biener <rguenther@suse.de>
Date: Mon, 14 Aug 2023 12:02:41 +0200
Subject: [PATCH] test unroll
To: gcc-patches@gcc.gnu.org

---
 .../gcc.dg/fstack-protector-strong.c          |   4 +-
 gcc/tree-ssa-loop-ivcanon.cc                  | 157 ++++++++++++------
 2 files changed, 112 insertions(+), 49 deletions(-)

diff --git a/gcc/testsuite/gcc.dg/fstack-protector-strong.c b/gcc/testsuite/gcc.dg/fstack-protector-strong.c
index 94dc3508f1a..fafa1917449 100644
--- a/gcc/testsuite/gcc.dg/fstack-protector-strong.c
+++ b/gcc/testsuite/gcc.dg/fstack-protector-strong.c
@@ -28,7 +28,7 @@ foo1 ()
 struct ArrayStruct
 {
   int a;
-  int array[10];
+  int array[18];
 };
 
 struct AA
@@ -43,7 +43,7 @@ foo2 ()
 {
   struct AA aa;
   int i;
-  for (i = 0; i < 10; ++i)
+  for (i = 0; i < 18; ++i)
     {
       aa.as.array[i] = i * (i-1) + i / 2;
     }
diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
index 99e50ee2efe..51543e43cbc 100644
--- a/gcc/tree-ssa-loop-ivcanon.cc
+++ b/gcc/tree-ssa-loop-ivcanon.cc
@@ -158,6 +158,7 @@ struct loop_size
   int num_branches_on_hot_path;
 };
 
+#if 0
 /* Return true if OP in STMT will be constant after peeling LOOP.  */
 
 static bool
@@ -245,6 +246,7 @@ constant_after_peeling (tree op, gimple *stmt, class loop *loop)
     }
   return true;
 }
+#endif
 
 /* Computes an estimated number of insns in LOOP.
    EXIT (if non-NULL) is an exite edge that will be eliminated in all but last
@@ -276,6 +278,31 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
 
   if (dump_file && (dump_flags & TDF_DETAILS))
     fprintf (dump_file, "Estimating sizes for loop %i\n", loop->num);
+
+  static hash_map<tree, tree> *vals;
+  vals = new hash_map<tree, tree>;
+  edge pe = loop_preheader_edge (loop);
+  for (auto si = gsi_start_phis (loop->header);
+       !gsi_end_p (si); gsi_next (&si))
+    {
+      if (virtual_operand_p (gimple_phi_result (*si)))
+	continue;
+      tree val = gimple_phi_arg_def_from_edge (*si, pe);
+      if (CONSTANT_CLASS_P (val))
+	{
+	  vals->put (gimple_phi_result (*si), val);
+	  tree ev = analyze_scalar_evolution (loop, gimple_phi_result (*si));
+	  if (!chrec_contains_undetermined (ev)
+	      && !chrec_contains_symbols (ev))
+	    size->constant_iv = true;
+	}
+    }
+
+  auto els_valueize = [] (tree op) -> tree
+    { if (tree *val = vals->get (op)) return *val; return op; };
+
+  auto process_loop = [&] () -> bool
+    {
   for (i = 0; i < loop->num_nodes; i++)
     {
       if (edge_to_cancel && body[i] != edge_to_cancel->src
@@ -322,54 +349,47 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
 			     "in last copy.\n");
 		  likely_eliminated_last = true;
 		}
-	      /* Sets of IV variables  */
-	      if (gimple_code (stmt) == GIMPLE_ASSIGN
-		  && constant_after_peeling (gimple_assign_lhs (stmt), stmt, loop))
+	      /* Stores are not eliminated.  */
+	      if (gimple_vdef (stmt))
+		continue;
+	      /* Below we are using constant folding to decide whether
+		 we can elide a stmt.  While for the first iteration we
+		 could use the actual value for the rest we have to
+		 avoid the situation re-using a * 1 or + 0 operand, so
+		 require all SSA operands to be constants here.  */
+	      bool fail = false;
+	      ssa_op_iter iter;
+	      use_operand_p use_p;
+	      FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE)
+		if (!vals->get (USE_FROM_PTR (use_p)))
+		  {
+		    fail = true;
+		    break;
+		  }
+	      if (fail)
+		continue;
+	      tree val;
+	      /* Switches are not handled by folding.  */
+	      if (gimple_code (stmt) == GIMPLE_SWITCH
+		  && ! is_gimple_min_invariant
+		  (gimple_switch_index (as_a <gswitch *> (stmt)))
+		  && vals->get (gimple_switch_index
+				(as_a <gswitch *> (stmt))))
 		{
 		  if (dump_file && (dump_flags & TDF_DETAILS))
-		    fprintf (dump_file, "   Induction variable computation will"
-			     " be folded away.\n");
+		    fprintf (dump_file, "   Constant conditional.\n");
 		  likely_eliminated = true;
 		}
-	      /* Assignments of IV variables.  */
-	      else if (gimple_code (stmt) == GIMPLE_ASSIGN
-		       && TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME
-		       && constant_after_peeling (gimple_assign_rhs1 (stmt),
-						  stmt, loop)
-		       && (gimple_assign_rhs_class (stmt) != GIMPLE_BINARY_RHS
-			   || constant_after_peeling (gimple_assign_rhs2 (stmt),
-						      stmt, loop))
-		       && gimple_assign_rhs_class (stmt) != GIMPLE_TERNARY_RHS)
+	      else if ((val = gimple_fold_stmt_to_constant (stmt, els_valueize))
+		       && CONSTANT_CLASS_P (val))
 		{
-		  size->constant_iv = true;
 		  if (dump_file && (dump_flags & TDF_DETAILS))
 		    fprintf (dump_file,
 			     "   Constant expression will be folded away.\n");
 		  likely_eliminated = true;
-		}
-	      /* Conditionals.  */
-	      else if ((gimple_code (stmt) == GIMPLE_COND
-			&& constant_after_peeling (gimple_cond_lhs (stmt), stmt,
-						   loop)
-			&& constant_after_peeling (gimple_cond_rhs (stmt), stmt,
-						   loop)
-			/* We don't simplify all constant compares so make sure
-			   they are not both constant already.  See PR70288.  */
-			&& (! is_gimple_min_invariant (gimple_cond_lhs (stmt))
-			    || ! is_gimple_min_invariant
-				 (gimple_cond_rhs (stmt))))
-		       || (gimple_code (stmt) == GIMPLE_SWITCH
-			   && constant_after_peeling (gimple_switch_index (
-							as_a <gswitch *>
-							  (stmt)),
-						      stmt, loop)
-			   && ! is_gimple_min_invariant
-				   (gimple_switch_index
-				      (as_a <gswitch *> (stmt)))))
-		{
-		  if (dump_file && (dump_flags & TDF_DETAILS))
-		    fprintf (dump_file, "   Constant conditional.\n");
-		  likely_eliminated = true;
+		  if (tree lhs = gimple_get_lhs (stmt))
+		    if (TREE_CODE (lhs) == SSA_NAME)
+		      vals->put (lhs, val);
 		}
 	    }
 
@@ -385,11 +405,55 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
 	  if ((size->overall * 3 / 2 - size->eliminated_by_peeling
 	      - size->last_iteration_eliminated_by_peeling) > upper_bound)
 	    {
-              free (body);
-	      return true;
+	      free (body);
+	      delete vals;
+	      vals = nullptr;
+	      return false;
 	    }
 	}
     }
+  return true;
+  };
+
+  /* Estimate the size of the unrolled first iteration.  */
+  if (!process_loop ())
+    return true;
+
+  /* Determine whether the IVs will stay constant (we simply assume that
+     if the 2nd iteration receives a constant value the third and all
+     further will so as well).  */
+  for (auto si = gsi_start_phis (loop->header);
+       !gsi_end_p (si); gsi_next (&si))
+    {
+      if (virtual_operand_p (gimple_phi_result (*si)))
+	continue;
+      tree def = gimple_phi_arg_def_from_edge (*si, loop_latch_edge (loop));
+      if (CONSTANT_CLASS_P (def) || vals->get (def))
+	/* ???  If we compute the first iteration size separately we
+	   could also handle an invariant backedge value more
+	   optimistically.
+	   ???  Note the actual value we leave here may still have an
+	   effect on the constant-ness.  */
+	;
+      else
+	vals->remove (gimple_phi_result (*si));
+    }
+
+  /* Reset sizes and compute the size based on the adjustment above.
+     ???  We could keep the more precise and optimistic counts for
+     the first iteration.  */
+  size->overall = 0;
+  size->eliminated_by_peeling = 0;
+  size->last_iteration = 0;
+  size->last_iteration_eliminated_by_peeling = 0;
+  size->num_pure_calls_on_hot_path = 0;
+  size->num_non_pure_calls_on_hot_path = 0;
+  size->non_call_stmts_on_hot_path = 0;
+  size->num_branches_on_hot_path = 0;
+  size->constant_iv = 0;
+  if (!process_loop ())
+    return true;
+
   while (path.length ())
     {
       basic_block bb = path.pop ();
@@ -411,13 +475,10 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
 	  else if (gimple_code (stmt) != GIMPLE_DEBUG)
 	    size->non_call_stmts_on_hot_path++;
 	  if (((gimple_code (stmt) == GIMPLE_COND
-	        && (!constant_after_peeling (gimple_cond_lhs (stmt), stmt, loop)
-		    || !constant_after_peeling (gimple_cond_rhs (stmt), stmt,
-						loop)))
+		&& (!vals->get (gimple_cond_lhs (stmt))
+		    || !vals->get (gimple_cond_rhs (stmt))))
 	       || (gimple_code (stmt) == GIMPLE_SWITCH
-		   && !constant_after_peeling (gimple_switch_index (
-						 as_a <gswitch *> (stmt)),
-					       stmt, loop)))
+		   && !vals->get (gimple_switch_index (as_a <gswitch *> (stmt)))))
 	      && (!exit || bb != exit->src))
 	    size->num_branches_on_hot_path++;
 	}
@@ -429,6 +490,8 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
 	     size->last_iteration_eliminated_by_peeling);
 
   free (body);
+  delete vals;
+  vals = nullptr;
   return false;
 }
 
-- 
2.35.3



* Re: [PATCH] tree-optimization/110991 - unroll size estimate after vectorization
       [not found] <20230814132954.12C5B385840B@sourceware.org>
@ 2023-08-14 16:40 ` Jan Hubicka
  0 siblings, 0 replies; 3+ messages in thread
From: Jan Hubicka @ 2023-08-14 16:40 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches

> The following testcase shows that we are bad at identifying inductions
> that will be optimized away after vectorizing them because SCEV doesn't
> handle vectorized defs.  The following rolls a simpler identification
> of SSA cycles covering a PHI and an assignment with a binary operator
> with a constant second operand.
> 
> Bootstrapped and tested on x86_64-unknown-linux-gnu.
> 
> Note, I also have a more general approach (will reply to this mail
> with an RFC).

Looks good to me.  This can clearly be generalized to more complicated
expressions, so that is what you plan to do next?

Honza
> 
> Any comments on this particular change?
> 
> 	PR tree-optimization/110991
> 	* tree-ssa-loop-ivcanon.cc (constant_after_peeling): Handle
> 	VIEW_CONVERT_EXPR <op>, handle more simple IV-like SSA cycles
> 	that will end up constant.
> 
> 	* gcc.dg/tree-ssa/cunroll-16.c: New testcase.
> ---
>  gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c | 17 ++++++++
>  gcc/tree-ssa-loop-ivcanon.cc               | 46 +++++++++++++++++++++-
>  2 files changed, 62 insertions(+), 1 deletion(-)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c
> 
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c b/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c
> new file mode 100644
> index 00000000000..9bb66ff8299
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c
> @@ -0,0 +1,17 @@
> +/* PR/110991 */
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-cunroll-details -fdump-tree-optimized" } */
> +
> +static unsigned char a;
> +static signed char b;
> +void foo(void);
> +int main() {
> +  a = 25;
> +  for (; a > 13; --a)
> +    b = a > 127 ?: a << 3;
> +  if (!b)
> +    foo();
> +}
> +
> +/* { dg-final { scan-tree-dump "optimized: loop with \[0-9\]\+ iterations completely unrolled" "cunroll" } } */
> +/* { dg-final { scan-tree-dump-not "foo" "optimized" } } */
> diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
> index a895e8e65be..99e50ee2efe 100644
> --- a/gcc/tree-ssa-loop-ivcanon.cc
> +++ b/gcc/tree-ssa-loop-ivcanon.cc
> @@ -166,6 +166,11 @@ constant_after_peeling (tree op, gimple *stmt, class loop *loop)
>    if (CONSTANT_CLASS_P (op))
>      return true;
>  
> +  /* Get at the actual SSA operand.  */
> +  if (handled_component_p (op)
> +      && TREE_CODE (TREE_OPERAND (op, 0)) == SSA_NAME)
> +    op = TREE_OPERAND (op, 0);
> +
>    /* We can still fold accesses to constant arrays when index is known.  */
>    if (TREE_CODE (op) != SSA_NAME)
>      {
> @@ -198,7 +203,46 @@ constant_after_peeling (tree op, gimple *stmt, class loop *loop)
>    tree ev = analyze_scalar_evolution (loop, op);
>    if (chrec_contains_undetermined (ev)
>        || chrec_contains_symbols (ev))
> -    return false;
> +    {
> +      if (ANY_INTEGRAL_TYPE_P (TREE_TYPE (op)))
> +	{
> +	  gassign *ass = nullptr;
> +	  gphi *phi = nullptr;
> +	  if (is_a <gassign *> (SSA_NAME_DEF_STMT (op)))
> +	    {
> +	      ass = as_a <gassign *> (SSA_NAME_DEF_STMT (op));
> +	      if (TREE_CODE (gimple_assign_rhs1 (ass)) == SSA_NAME)
> +		phi = dyn_cast <gphi *>
> +			(SSA_NAME_DEF_STMT (gimple_assign_rhs1  (ass)));
> +	    }
> +	  else if (is_a <gphi *> (SSA_NAME_DEF_STMT (op)))
> +	    {
> +	      phi = as_a <gphi *> (SSA_NAME_DEF_STMT (op));
> +	      if (gimple_bb (phi) == loop->header)
> +		{
> +		  tree def = gimple_phi_arg_def_from_edge
> +		    (phi, loop_latch_edge (loop));
> +		  if (TREE_CODE (def) == SSA_NAME
> +		      && is_a <gassign *> (SSA_NAME_DEF_STMT (def)))
> +		    ass = as_a <gassign *> (SSA_NAME_DEF_STMT (def));
> +		}
> +	    }
> +	  if (ass && phi)
> +	    {
> +	      tree rhs1 = gimple_assign_rhs1 (ass);
> +	      if (gimple_assign_rhs_class (ass) == GIMPLE_BINARY_RHS
> +		  && CONSTANT_CLASS_P (gimple_assign_rhs2 (ass))
> +		  && rhs1 == gimple_phi_result (phi)
> +		  && gimple_bb (phi) == loop->header
> +		  && (gimple_phi_arg_def_from_edge (phi, loop_latch_edge (loop))
> +		      == gimple_assign_lhs (ass))
> +		  && (CONSTANT_CLASS_P (gimple_phi_arg_def_from_edge
> +					 (phi, loop_preheader_edge (loop)))))
> +		return true;
> +	    }
> +	}
> +      return false;
> +    }
>    return true;
>  }
>  
> -- 
> 2.35.3


* [PATCH] tree-optimization/110991 - unroll size estimate after vectorization
@ 2023-08-14 13:29 Richard Biener
  0 siblings, 0 replies; 3+ messages in thread
From: Richard Biener @ 2023-08-14 13:29 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jan Hubicka

The following testcase shows that we are bad at identifying inductions
that will be optimized away after vectorizing them because SCEV doesn't
handle vectorized defs.  The following rolls a simpler identification
of SSA cycles covering a PHI and an assignment with a binary operator
with a constant second operand.
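
To make the matched shape concrete, here is a minimal example of such a
cycle (my illustration; essentially the shape of the testcase below):

  /* Illustration only.  The loop counter below becomes, roughly,

       a_1 = PHI <25(preheader), a_2(latch)>
       ...
       a_2 = a_1 + 255;    (i.e. --a in unsigned char arithmetic)

     a header PHI with a constant preheader argument whose latch argument
     is defined by a binary assignment with a constant second operand.
     The new code accepts such cycles even when SCEV cannot analyze the
     definition, e.g. after it has been vectorized.  */
  static unsigned char a;

  void
  count_down (void)
  {
    for (a = 25; a > 13; --a)
      ;
  }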

Bootstrapped and tested on x86_64-unknown-linux-gnu.

Note, I also have a more general approach (will reply to this mail
with an RFC).

Any comments on this particular change?

	PR tree-optimization/110991
	* tree-ssa-loop-ivcanon.cc (constant_after_peeling): Handle
	VIEW_CONVERT_EXPR <op>, handle more simple IV-like SSA cycles
	that will end up constant.

	* gcc.dg/tree-ssa/cunroll-16.c: New testcase.
---
 gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c | 17 ++++++++
 gcc/tree-ssa-loop-ivcanon.cc               | 46 +++++++++++++++++++++-
 2 files changed, 62 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c

diff --git a/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c b/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c
new file mode 100644
index 00000000000..9bb66ff8299
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/cunroll-16.c
@@ -0,0 +1,17 @@
+/* PR/110991 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-cunroll-details -fdump-tree-optimized" } */
+
+static unsigned char a;
+static signed char b;
+void foo(void);
+int main() {
+  a = 25;
+  for (; a > 13; --a)
+    b = a > 127 ?: a << 3;
+  if (!b)
+    foo();
+}
+
+/* { dg-final { scan-tree-dump "optimized: loop with \[0-9\]\+ iterations completely unrolled" "cunroll" } } */
+/* { dg-final { scan-tree-dump-not "foo" "optimized" } } */
diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
index a895e8e65be..99e50ee2efe 100644
--- a/gcc/tree-ssa-loop-ivcanon.cc
+++ b/gcc/tree-ssa-loop-ivcanon.cc
@@ -166,6 +166,11 @@ constant_after_peeling (tree op, gimple *stmt, class loop *loop)
   if (CONSTANT_CLASS_P (op))
     return true;
 
+  /* Get at the actual SSA operand.  */
+  if (handled_component_p (op)
+      && TREE_CODE (TREE_OPERAND (op, 0)) == SSA_NAME)
+    op = TREE_OPERAND (op, 0);
+
   /* We can still fold accesses to constant arrays when index is known.  */
   if (TREE_CODE (op) != SSA_NAME)
     {
@@ -198,7 +203,46 @@ constant_after_peeling (tree op, gimple *stmt, class loop *loop)
   tree ev = analyze_scalar_evolution (loop, op);
   if (chrec_contains_undetermined (ev)
       || chrec_contains_symbols (ev))
-    return false;
+    {
+      if (ANY_INTEGRAL_TYPE_P (TREE_TYPE (op)))
+	{
+	  gassign *ass = nullptr;
+	  gphi *phi = nullptr;
+	  if (is_a <gassign *> (SSA_NAME_DEF_STMT (op)))
+	    {
+	      ass = as_a <gassign *> (SSA_NAME_DEF_STMT (op));
+	      if (TREE_CODE (gimple_assign_rhs1 (ass)) == SSA_NAME)
+		phi = dyn_cast <gphi *>
+			(SSA_NAME_DEF_STMT (gimple_assign_rhs1  (ass)));
+	    }
+	  else if (is_a <gphi *> (SSA_NAME_DEF_STMT (op)))
+	    {
+	      phi = as_a <gphi *> (SSA_NAME_DEF_STMT (op));
+	      if (gimple_bb (phi) == loop->header)
+		{
+		  tree def = gimple_phi_arg_def_from_edge
+		    (phi, loop_latch_edge (loop));
+		  if (TREE_CODE (def) == SSA_NAME
+		      && is_a <gassign *> (SSA_NAME_DEF_STMT (def)))
+		    ass = as_a <gassign *> (SSA_NAME_DEF_STMT (def));
+		}
+	    }
+	  if (ass && phi)
+	    {
+	      tree rhs1 = gimple_assign_rhs1 (ass);
+	      if (gimple_assign_rhs_class (ass) == GIMPLE_BINARY_RHS
+		  && CONSTANT_CLASS_P (gimple_assign_rhs2 (ass))
+		  && rhs1 == gimple_phi_result (phi)
+		  && gimple_bb (phi) == loop->header
+		  && (gimple_phi_arg_def_from_edge (phi, loop_latch_edge (loop))
+		      == gimple_assign_lhs (ass))
+		  && (CONSTANT_CLASS_P (gimple_phi_arg_def_from_edge
+					 (phi, loop_preheader_edge (loop)))))
+		return true;
+	    }
+	}
+      return false;
+    }
   return true;
 }
 
-- 
2.35.3

