From: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
To: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>,
	gcc Patches <gcc-patches@gcc.gnu.org>,
	 richard.sandiford@arm.com
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Thu, 2 Feb 2023 20:21:37 +0530
Message-ID: <CAAgBjMkxZXVPYoX_C=deX1P83ZXXqxoWWAkhuFMVE2ha3XJG+A@mail.gmail.com>
In-Reply-To: <mptv8kle4hd.fsf@arm.com>

[-- Attachment #1: Type: text/plain, Size: 10533 bytes --]

On Wed, 1 Feb 2023 at 21:56, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Thu, 12 Jan 2023 at 21:21, Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > On Tue, 6 Dec 2022 at 07:01, Prathamesh Kulkarni
> >> > <prathamesh.kulkarni@linaro.org> wrote:
> >> >>
> >> >> On Mon, 5 Dec 2022 at 16:50, Richard Sandiford
> >> >> <richard.sandiford@arm.com> wrote:
> >> >> >
> >> >> > Richard Sandiford via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> >> >> > > Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> >> > >> Hi,
> >> >> > >> For the following test-case:
> >> >> > >>
> >> >> > >> int16x8_t foo(int16_t x, int16_t y)
> >> >> > >> {
> >> >> > >>   return (int16x8_t) { x, y, x, y, x, y, x, y };
> >> >> > >> }
> >> >> > >>
> >> >> > >> Code gen at -O3:
> >> >> > >> foo:
> >> >> > >>         dup    v0.8h, w0
> >> >> > >>         ins     v0.h[1], w1
> >> >> > >>         ins     v0.h[3], w1
> >> >> > >>         ins     v0.h[5], w1
> >> >> > >>         ins     v0.h[7], w1
> >> >> > >>         ret
> >> >> > >>
> >> >> > >> For 16 elements, this results in 8 ins instructions, which is
> >> >> > >> probably not optimal.
> >> >> > >> I guess the above code-gen would be equivalent to the following?
> >> >> > >> dup v0.8h, w0
> >> >> > >> dup v1.8h, w1
> >> >> > >> zip1 v0.8h, v0.8h, v1.8h
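> >> >> > >>
> >> >> > >> A minimal ACLE intrinsics sketch of that sequence (hypothetical
> >> >> > >> foo_zip, assuming vdupq_n_s16 and vzip1q_s16 map to the dup and
> >> >> > >> zip1 above):
> >> >> > >>
> >> >> > >> #include <arm_neon.h>
> >> >> > >>
> >> >> > >> int16x8_t foo_zip (int16_t x, int16_t y)
> >> >> > >> {
> >> >> > >>   /* dup v0.8h; dup v1.8h; zip1 -> { x, y, x, y, x, y, x, y }  */
> >> >> > >>   return vzip1q_s16 (vdupq_n_s16 (x), vdupq_n_s16 (y));
> >> >> > >> }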
> >> >> > >>
> >> >> > >> I have attached a patch that does this when the number of elements
> >> >> > >> is >= 8, which should be better than the current code-gen.
> >> >> > >> The patch passes bootstrap+test on aarch64-linux-gnu.
> >> >> > >> Does the patch look OK?
> >> >> > >>
> >> >> > >> Thanks,
> >> >> > >> Prathamesh
> >> >> > >>
> >> >> > >> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> >> >> > >> index c91df6f5006..e5dea70e363 100644
> >> >> > >> --- a/gcc/config/aarch64/aarch64.cc
> >> >> > >> +++ b/gcc/config/aarch64/aarch64.cc
> >> >> > >> @@ -22028,6 +22028,39 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >> >> > >>        return;
> >> >> > >>      }
> >> >> > >>
> >> >> > >> +  /* Check for interleaving case.
> >> >> > >> +     For example, if the initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
> >> >> > >> +     Generate following code:
> >> >> > >> +     dup v0.h, x
> >> >> > >> +     dup v1.h, y
> >> >> > >> +     zip1 v0.h, v0.h, v1.h
> >> >> > >> +     for "large enough" initializer.  */
> >> >> > >> +
> >> >> > >> +  if (n_elts >= 8)
> >> >> > >> +    {
> >> >> > >> +      int i;
> >> >> > >> +      for (i = 2; i < n_elts; i++)
> >> >> > >> +    if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
> >> >> > >> +      break;
> >> >> > >> +
> >> >> > >> +      if (i == n_elts)
> >> >> > >> +    {
> >> >> > >> +      machine_mode mode = GET_MODE (target);
> >> >> > >> +      rtx dest[2];
> >> >> > >> +
> >> >> > >> +      for (int i = 0; i < 2; i++)
> >> >> > >> +        {
> >> >> > >> +          rtx x = copy_to_mode_reg (GET_MODE_INNER (mode), XVECEXP (vals, 0, i));
> >> >> > >
> >> >> > > Formatting nit: long line.
> >> >> > >
> >> >> > >> +          dest[i] = gen_reg_rtx (mode);
> >> >> > >> +          aarch64_emit_move (dest[i], gen_vec_duplicate (mode, x));
> >> >> > >> +        }
> >> >> > >
> >> >> > > This could probably be written:
> >> >> > >
> >> >> > >         for (int i = 0; i < 2; i++)
> >> >> > >           {
> >> >> > >             rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
> >> >> > >             dest[i] = force_reg (GET_MODE_INNER (mode), x);
> >> >> >
> >> >> > Oops, I meant "mode" rather than "GET_MODE_INNER (mode)", sorry.
> >> >> Thanks, I have pushed the change in
> >> >> 769370f3e2e04823c8a621d8ffa756dd83ebf21e after running
> >> >> bootstrap+test on aarch64-linux-gnu.
> >> > Hi Richard,
> >> > I have attached a patch that extends the transform to the case where
> >> > one half is a dup and the other is a set of constants.
> >> > For eg:
> >> > int8x16_t f(int8_t x)
> >> > {
> >> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4, x, 5, x, 6, x, 7, x, 8 };
> >> > }
> >> >
> >> > code-gen trunk:
> >> > f:
> >> >         adrp    x1, .LC0
> >> >         ldr     q0, [x1, #:lo12:.LC0]
> >> >         ins     v0.b[0], w0
> >> >         ins     v0.b[2], w0
> >> >         ins     v0.b[4], w0
> >> >         ins     v0.b[6], w0
> >> >         ins     v0.b[8], w0
> >> >         ins     v0.b[10], w0
> >> >         ins     v0.b[12], w0
> >> >         ins     v0.b[14], w0
> >> >         ret
> >> >
> >> > code-gen with patch:
> >> > f:
> >> >         dup     v0.16b, w0
> >> >         adrp    x0, .LC0
> >> >         ldr     q1, [x0, #:lo12:.LC0]
> >> >         zip1    v0.16b, v0.16b, v1.16b
> >> >         ret
> >> >
> >> > Bootstrapped+tested on aarch64-linux-gnu.
> >> > Does it look OK?
> >>
> >> Looks like a nice improvement.  It'll need to wait for GCC 14 now though.
> >>
> >> However, rather than handle this case specially, I think we should instead
> >> take a divide-and-conquer approach: split the initialiser into even and
> >> odd elements, find the best way of loading each part, then compare the
> >> cost of these sequences + ZIP with the cost of the fallback code (the code
> >> later in aarch64_expand_vector_init).
> >>
> >> For example, doing that would allow:
> >>
> >>   { x, y, 0, y, 0, y, 0, y, 0, y }
> >>
> >> to be loaded more easily, even though the even elements aren't wholly
> >> constant.
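> >> (Here the even elements { x, 0, 0, 0, 0 } need only a single insert
> >> into a constant vector, while the odd elements { y, y, y, y, y } are
> >> just a dup.)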
> > Hi Richard,
> > I have attached a prototype patch based on the above approach.
> > It subsumes the special case for the {x, y, x, y, x, y, x, y} pattern
> > above by generating the same sequence (so I removed that hunk), and it
> > improves the following cases:
> >
> > (a)
> > int8x16_t f_s16(int8_t x)
> > {
> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
> >                        x, 5, x, 6, x, 7, x, 8 };
> > }
> >
> > code-gen trunk:
> > f_s16:
> >         adrp    x1, .LC0
> >         ldr     q0, [x1, #:lo12:.LC0]
> >         ins     v0.b[0], w0
> >         ins     v0.b[2], w0
> >         ins     v0.b[4], w0
> >         ins     v0.b[6], w0
> >         ins     v0.b[8], w0
> >         ins     v0.b[10], w0
> >         ins     v0.b[12], w0
> >         ins     v0.b[14], w0
> >         ret
> >
> > code-gen with patch:
> > f_s16:
> >         dup     v0.16b, w0
> >         adrp    x0, .LC0
> >         ldr     q1, [x0, #:lo12:.LC0]
> >         zip1    v0.16b, v0.16b, v1.16b
> >         ret
> >
> > (b)
> > int8x16_t f_s16(int8_t x, int8_t y)
> > {
> >   return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
> >                        4, y, 5, y, 6, y, 7, y };
> > }
> >
> > code-gen trunk:
> > f_s16:
> >         adrp    x2, .LC0
> >         ldr     q0, [x2, #:lo12:.LC0]
> >         ins     v0.b[0], w0
> >         ins     v0.b[1], w1
> >         ins     v0.b[3], w1
> >         ins     v0.b[5], w1
> >         ins     v0.b[7], w1
> >         ins     v0.b[9], w1
> >         ins     v0.b[11], w1
> >         ins     v0.b[13], w1
> >         ins     v0.b[15], w1
> >         ret
> >
> > code-gen patch:
> > f_s16:
> >         adrp    x2, .LC0
> >         dup     v1.16b, w1
> >         ldr     q0, [x2, #:lo12:.LC0]
> >         ins     v0.b[0], w0
> >         zip1    v0.16b, v0.16b, v1.16b
> >         ret
>
> Nice.
>
> > There are a couple of issues I have come across:
> > (1) Choosing the element to pad the vector with.
> > For example, if we are initializing a vector, say { x, y, 0, y, 1, y, 2, y }
> > with mode V8HI, we split it into { x, 0, 1, 2 } and { y, y, y, y }.
> > However, since the mode is V8HI, we would need to pad the above split
> > vectors with 4 more elements to match the vector length.
> > For { x, 0, 1, 2 } using any constant is the obvious choice, while for
> > { y, y, y, y } using 'y' is the obvious choice, thus making them:
> > { x, 0, 1, 2, 0, 0, 0, 0 } and { y, y, y, y, y, y, y, y }
> > These would then be merged using zip1, which discards the upper half
> > of both vectors, as in the sketch below.
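> >
> > A minimal intrinsics sketch of that shape (hypothetical helper; the
> > values used to pad the upper halves are arbitrary, since zip1 only
> > reads the low halves):
> >
> > #include <arm_neon.h>
> >
> > int16x8_t pad_and_zip (int16_t x, int16_t y)
> > {
> >   int16x4_t even = { x, 0, 1, 2 };
> >   int16x4_t odd = vdup_n_s16 (y);
> >   /* Widen each 64-bit half to 128 bits; the high halves passed to
> >      vcombine_s16 are don't-care for zip1.  */
> >   return vzip1q_s16 (vcombine_s16 (even, even),
> >                      vcombine_s16 (odd, odd));
> > }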
> > Currently I encode the above two heuristics in
> > aarch64_expand_vector_init_get_padded_elem:
> > (a) If the split portion contains a constant, use that constant to pad
> > the vector.
> > (b) If the split portion contains only variables, use the most
> > frequently repeating variable to pad the vector.
> > I suppose this could be improved, though?
>
> I think we should just build two 64-bit vectors (V4HIs) and use a subreg
> to fill the upper elements with undefined values.
>
> I suppose in principle we would have the same problem when splitting
> a 64-bit vector into 2 32-bit vectors, but it's probably better to punt
> on that for now.  Eventually it would be worth adding full support for
> 32-bit Advanced SIMD modes (with necessary restrictions for FP exceptions)
> but it's quite a big task.  The 128-bit to 64-bit split is the one that
> matters most.
>
> > (2) Setting cost for zip1:
> > Currently it returns 4 as the cost of the following zip1 insn:
> > (set (reg:V8HI 102)
> >     (unspec:V8HI [
> >             (reg:V8HI 103)
> >             (reg:V8HI 108)
> >         ] UNSPEC_ZIP1))
> > I am not sure whether that's correct, and if not, what cost should be
> > used for zip1 here?
>
> TBH 4 seems a bit optimistic.  It's COSTS_N_INSNS (1), whereas the
> generic advsimd_vec_cost::permute_cost is 2 insns.  But the costs of
> inserts are probably underestimated to the same extent, so hopefully
> things work out.
>
> So it's probably best to accept the costs as they're currently given.
> Changing them would need extensive testing.
>
> However, one of the advantages of the split is that it allows the
> subvectors to be built in parallel.  When optimising for speed,
> it might make sense to take the maximum of the subsequence costs
> and add the cost of the zip to that.
Hi Richard,
Thanks for the suggestions.
In the attached patch, it recurses only if nelts == 16, to punt on the
64-bit -> 32-bit split, and uses std::max (even_init, odd_init)
+ insn_cost (zip1_insn) to compute the total cost of the sequence.
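
For example, if the even-half sequence costs 8, the odd-half sequence
costs 12 and the zip1 costs 4, the total cost is max (8, 12) + 4 = 16,
reflecting that the two subvectors can be built in parallel.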

So, for following case:
int8x16_t f_s8(int8_t x)
{
  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
                       x, 5, x, 6, x, 7, x, 8 };
}

it now generates:
f_s8:
        dup     v0.8b, w0
        adrp    x0, .LC0
        ldr     d1, [x0, #:lo12:.LC0]
        zip1    v0.16b, v0.16b, v1.16b
        ret

Which I assume is correct, since zip1 merges the lower halves of the
two vectors and ignores their (undefined) upper halves?
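
As a sanity check, a minimal intrinsics sketch of the same shape
(hypothetical helper; vzip1q_s8 reads only the low 64 bits of each
source, so the undefined upper halves never reach the result):

#include <arm_neon.h>

int8x16_t zip_halves (int8x8_t even, int8x8_t odd)
{
  /* The high halves passed to vcombine_s8 stand in for the undefined
     subreg upper halves; zip1 ignores them.  */
  return vzip1q_s8 (vcombine_s8 (even, even),
                    vcombine_s8 (odd, odd));
}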

Thanks,
Prathamesh
>
> Thanks,
> Richard

[-- Attachment #2: gnu-821-2.txt --]
[-- Type: text/plain, Size: 6799 bytes --]

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index acc0cfe5f94..a527c48e916 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -21976,7 +21976,7 @@ aarch64_simd_make_constant (rtx vals)
    initialised to contain VALS.  */
 
 void
-aarch64_expand_vector_init (rtx target, rtx vals)
+aarch64_expand_vector_init_fallback (rtx target, rtx vals)
 {
   machine_mode mode = GET_MODE (target);
   scalar_mode inner_mode = GET_MODE_INNER (mode);
@@ -22189,7 +22189,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
 	    }
 	  XVECEXP (copy, 0, i) = subst;
 	}
-      aarch64_expand_vector_init (target, copy);
+      aarch64_expand_vector_init_fallback (target, copy);
     }
 
   /* Insert the variable lanes directly.  */
@@ -22203,6 +22203,90 @@ aarch64_expand_vector_init (rtx target, rtx vals)
     }
 }
 
+DEBUG_FUNCTION
+static void
+aarch64_expand_vector_init_debug_seq (rtx_insn *seq, const char *s)
+{
+  fprintf (stderr, "%s: %u\n", s, seq_cost (seq, !optimize_size));
+  for (rtx_insn *i = seq; i; i = NEXT_INSN (i))
+    {
+      debug_rtx (PATTERN (i));
+      fprintf (stderr, "cost: %d\n", pattern_cost (PATTERN (i), !optimize_size));
+    }
+}
+
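+/* Return a PARALLEL in the 64-bit vector mode corresponding to MODE,
+   containing the even-indexed elements of VALS if EVEN_P, otherwise
+   the odd-indexed elements.  */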
+static rtx
+aarch64_expand_vector_init_split_vals (machine_mode mode, rtx vals, bool even_p)
+{
+  int n = XVECLEN (vals, 0);
+  machine_mode new_mode
+    = aarch64_simd_container_mode (GET_MODE_INNER (mode), 64);
+  rtvec vec = rtvec_alloc (n / 2);
+  for (int i = 0; i < n / 2; i++)
+    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
+				  : XVECEXP (vals, 0, 2 * i + 1);
+  return gen_rtx_PARALLEL (new_mode, vec);
+}
+
+/* The function does the following:
+   (a) Generate a code sequence by splitting VALS into even and odd
+       halves, recursively calling itself to initialize them, and
+       merging the results with zip1.
+   (b) Generate a code sequence directly using
+       aarch64_expand_vector_init_fallback.
+   (c) Compare the costs of (a) and (b), and choose the cheaper one.  */
+
+void
+aarch64_expand_vector_init (rtx target, rtx vals)
+{
+  machine_mode mode = GET_MODE (target);
+  int n_elts = XVECLEN (vals, 0);
+
+  if (n_elts < 16)
+    {
+      aarch64_expand_vector_init_fallback (target, vals);
+      return;
+    }
+
+  start_sequence ();
+  rtx dest[2];
+  unsigned costs[2];
+  for (int i = 0; i < 2; i++)
+    {
+      start_sequence ();
+      rtx new_vals
+	= aarch64_expand_vector_init_split_vals (mode, vals, i == 0);
+      rtx tmp_reg = gen_reg_rtx (GET_MODE (new_vals));
+      aarch64_expand_vector_init (tmp_reg, new_vals);
+      dest[i] = gen_rtx_SUBREG (mode, tmp_reg, 0);
+      rtx_insn *rec_seq = get_insns ();
+      end_sequence ();
+      costs[i] = seq_cost (rec_seq, !optimize_size);
+      emit_insn (rec_seq);
+    }
+
+  rtvec v = gen_rtvec (2, dest[0], dest[1]);
+  rtx_insn *zip1_insn
+    = emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
+  unsigned seq_total_cost
+    = std::max (costs[0], costs[1]) + insn_cost (zip1_insn, !optimize_size);
+
+  rtx_insn *seq = get_insns ();
+  end_sequence ();
+
+  start_sequence ();
+  aarch64_expand_vector_init_fallback (target, vals);
+  rtx_insn *fallback_seq = get_insns ();
+  end_sequence ();
+
+  emit_insn (seq_total_cost < seq_cost (fallback_seq, !optimize_size)
+	     ? seq : fallback_seq);
+}
+
 /* Emit RTL corresponding to:
    insr TARGET, ELEM.  */
 
diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
similarity index 100%
rename from gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
rename to gcc/testsuite/gcc.target/aarch64/vec-init-18.c
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
new file mode 100644
index 00000000000..e28fdcda29d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	dup	v[0-9]+\.8b, w[0-9]+
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x)
+{
+  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
+                       x, 5, x, 6, x, 7, x, 8 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
new file mode 100644
index 00000000000..9366ca349b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	dup	v[0-9]+\.8b, w[0-9]+
+**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
+                       4, y, 5, y, 6, y, 7, y };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
new file mode 100644
index 00000000000..e16459486d7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	ins	v0\.b\[1\], w1
+**	...
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
+                       7, 8, 9, 10, 11, 12, 13, 14 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22.c
new file mode 100644
index 00000000000..e5016a47a3b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.c
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/* Verify that the fallback code sequence is chosen over the
+   recursively generated code sequence merged with zip1.  */
+
+/*
+** f_s16:
+**	...
+**	sxth	w0, w0
+**	fmov	s0, w0
+**	ins	v0\.h\[1\], w1
+**	ins	v0\.h\[2\], w2
+**	ins	v0\.h\[3\], w3
+**	ins	v0\.h\[4\], w4
+**	ins	v0\.h\[5\], w5
+**	ins	v0\.h\[6\], w6
+**	ins	v0\.h\[7\], w7
+**	...
+**	ret
+*/
+
+int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
+                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
+{
+  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
+}

