From: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
To: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>,
	Richard Biener <rguenther@suse.de>,
	 gcc Patches <gcc-patches@gcc.gnu.org>,
	richard.sandiford@arm.com
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Fri, 21 Apr 2023 12:57:32 +0530
Message-ID: <CAAgBjMkiCcrK_GWEZTdcWWxS3d398LmyQDoPpfpkKvwKCvnncQ@mail.gmail.com>
In-Reply-To: <mptedopjx1d.fsf@arm.com>

[-- Attachment #1: Type: text/plain, Size: 7462 bytes --]

On Wed, 12 Apr 2023 at 14:29, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Thu, 6 Apr 2023 at 16:05, Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > On Tue, 4 Apr 2023 at 23:35, Richard Sandiford
> >> > <richard.sandiford@arm.com> wrote:
> >> >> > diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> > index cd9cace3c9b..3de79060619 100644
> >> >> > --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> > +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> > @@ -817,6 +817,62 @@ public:
> >> >> >
> >> >> >  class svdupq_impl : public quiet<function_base>
> >> >> >  {
> >> >> > +private:
> >> >> > +  gimple *
> >> >> > +  fold_nonconst_dupq (gimple_folder &f, unsigned factor) const
> >> >> > +  {
> >> >> > +    /* Lower lhs = svdupq (arg0, arg1, ..., argN) into:
> >> >> > +       tmp = {arg0, arg1, ..., arg<N-1>}
> >> >> > +       lhs = VEC_PERM_EXPR (tmp, tmp, {0, 1, 2, N-1, ...})  */
> >> >> > +
> >> >> > +    /* TODO: Revisit to handle factor by padding zeros.  */
> >> >> > +    if (factor > 1)
> >> >> > +      return NULL;
> >> >>
> >> >> Isn't the key thing here predicate vs. vector rather than factor == 1 vs.
> >> >> factor != 1?  Do we generate good code for b8, where factor should be 1?
> >> > Hi,
> >> > It generates the following code for svdup_n_b8:
> >> > https://pastebin.com/ypYt590c
> >>
> >> Hmm, yeah, not pretty :-)  But it's not pretty without either.
> >>
> >> > I suppose lowering to ctor+vec_perm_expr is not really useful
> >> > for this case because it won't simplify ctor, unlike the above case of
> >> > svdupq_s32 (x[0], x[1], x[2], x[3]);
> >> > However, I wonder if it's still a good idea to lower svdupq for
> >> > predicates, to represent svdupq (or other intrinsics) using GIMPLE
> >> > constructs as far as possible?
> >>
> >> It's possible, but I think we'd need an example in which it's a clear
> >> benefit.
> > Sorry, I posted the wrong test case above.
> > For the following test:
> > svbool_t f(uint8x16_t x)
> > {
> >   return svdupq_n_b8 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7],
> >                       x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]);
> > }
> >
> > Code-gen:
> > https://pastebin.com/maexgeJn
> >
> > I suppose it's equivalent to the following?
> >
> > svbool_t f2(uint8x16_t x)
> > {
> >   svuint8_t tmp = svdupq_n_u8 ((bool) x[0], (bool) x[1], (bool) x[2], (bool) x[3],
> >                                (bool) x[4], (bool) x[5], (bool) x[6], (bool) x[7],
> >                                (bool) x[8], (bool) x[9], (bool) x[10], (bool) x[11],
> >                                (bool) x[12], (bool) x[13], (bool) x[14], (bool) x[15]);
> >   return svcmpne_n_u8 (svptrue_b8 (), tmp, 0);
> > }
>
> Yeah, this is essentially the transformation that the svdupq rtl
> expander uses.  It would probably be a good idea to do that in
> gimple too.
Hi,
I tested the interleave+zip1 patch for vector initialization, and it
segfaulted during bootstrap while building
libgfortran/generated/matmul_i2.c.
Rebuilding with --enable-checking=rtl showed an out-of-bounds access in
aarch64_unzip_vector_init, in the following hunk:

+  rtvec vec = rtvec_alloc (n / 2);
+  for (int i = 0; i < n; i++)
+    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
+                                 : XVECEXP (vals, 0, 2 * i + 1);

which is incorrect, since it allocates n/2 elements but iterates and stores up to n.
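The fix (as in the attached patch) is simply to bound the loop at n / 2:

  rtvec vec = rtvec_alloc (n / 2);
  for (int i = 0; i < n / 2; i++)
    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
                                  : XVECEXP (vals, 0, 2 * i + 1);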
That fixed version passed bootstrap; however, it resulted in the
following fallout during the testsuite run:

1] sve/acle/general/dupq_[1-4].c tests fail.
For the following test:
int32x4_t f(int32_t x)
{
  return (int32x4_t) { x, 1, 2, 3 };
}

Code-gen without patch:
f:
        adrp    x1, .LC0
        ldr     q0, [x1, #:lo12:.LC0]
        ins     v0.s[0], w0
        ret

Code-gen with patch:
f:
        movi    v0.2s, 0x2
        adrp    x1, .LC0
        ldr     d1, [x1, #:lo12:.LC0]
        ins     v0.s[0], w0
        zip1    v0.4s, v0.4s, v1.4s
        ret

This shows fallback_seq_cost = 20 and seq_total_cost = 16, where
seq_total_cost is the cost of the interleave+zip1 sequence and
fallback_seq_cost is the cost of the fallback sequence.
Although the cost is lower, I am not sure whether the interleave+zip1
sequence is actually better in this case?
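For reference, the attached patch computes the speed cost of the
interleave+zip1 sequence as:

  unsigned seq_total_cost
    = (!optimize_size) ? std::max (costs[0], costs[1]) : costs[0] + costs[1];
  seq_total_cost += insn_cost (zip1_insn, !optimize_size);

i.e. the two recursively generated halves are costed as if they execute
in parallel, and only the more expensive half plus the zip1 counts.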

2] sve/acle/general/dupq_[5-6].c tests fail:
int32x4_t f(int32_t x0, int32_t x1, int32_t x2, int32_t x3)
{
  return (int32x4_t) { x0, x1, x2, x3 };
}

Code-gen without patch:
f:
        fmov    s0, w0
        ins     v0.s[1], w1
        ins     v0.s[2], w2
        ins     v0.s[3], w3
        ret

Code-gen with patch:
f:
        fmov    s0, w0
        fmov    s1, w1
        ins     v0.s[1], w2
        ins     v1.s[1], w3
        zip1    v0.4s, v0.4s, v1.4s
        ret

This shows fallback_seq_cost = 28 and seq_total_cost = 16.

3] aarch64/ldp_stp_16.c's cons2_8_float test fails.
Test case:
void cons2_8_float(float *x, float val0, float val1)
{
#pragma GCC unroll(8)
  for (int i = 0; i < 8 * 2; i += 2) {
    x[i + 0] = val0;
    x[i + 1] = val1;
  }
}

which is lowered to:
void cons2_8_float (float * x, float val0, float val1)
{
  vector(4) float _86;

  <bb 2> [local count: 119292720]:
  _86 = {val0_11(D), val1_13(D), val0_11(D), val1_13(D)};
  MEM <vector(4) float> [(float *)x_10(D)] = _86;
  MEM <vector(4) float> [(float *)x_10(D) + 16B] = _86;
  MEM <vector(4) float> [(float *)x_10(D) + 32B] = _86;
  MEM <vector(4) float> [(float *)x_10(D) + 48B] = _86;
  return;
}

Code-gen without patch:
cons2_8_float:
        dup     v0.4s, v0.s[0]
        ins     v0.s[1], v1.s[0]
        ins     v0.s[3], v1.s[0]
        stp     q0, q0, [x0]
        stp     q0, q0, [x0, 32]
        ret

Code-gen with patch:
cons2_8_float:
        dup     v1.2s, v1.s[0]
        dup     v0.2s, v0.s[0]
        zip1    v0.4s, v0.4s, v1.4s
        stp     q0, q0, [x0]
        stp     q0, q0, [x0, 32]
        ret

This shows fallback_seq_cost = 28 and seq_total_cost = 16.

I think the test fails because the generated code no longer matches:
**      dup     v([0-9]+)\.4s, .*

Would it be OK to amend the test, assuming the code-gen with the patch is better?
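If so, a possible amendment would be to match the 64-bit dups that the
patched sequence emits instead, something like:

**      dup     v([0-9]+)\.2s, .*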

4] aarch64/pr109072_1.c s32x4_3 test fails:
For the following test:
int32x4_t s32x4_3 (int32_t x, int32_t y)
{
  int32_t arr[] = { x, y, y, y };
  return vld1q_s32 (arr);
}

Code-gen without patch:
s32x4_3:
        dup     v0.4s, w1
        ins     v0.s[0], w0
        ret

Code-gen with patch:
s32x4_3:
        fmov    s1, w1
        fmov    s0, w0
        ins     v0.s[1], v1.s[0]
        dup     v1.2s, v1.s[0]
        zip1    v0.4s, v0.4s, v1.4s
        ret

This shows fallback_seq_cost = 20 and seq_total_cost = 16.
I am not sure why the interleave+zip1 cost comes out lower than the
fallback sequence cost in this case; I assume that the fallback
sequence is better here?

PS: The patch for folding svdupq to ctor+vec_perm_expr passes
bootstrap and testing without any issues.
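For reference, that fold lowers lhs = svdupq (arg0, ..., argN) roughly
as follows (a sketch, following the comment in fold_nonconst_dupq):

  tmp = {arg0, arg1, ..., arg<N-1>};
  lhs = VEC_PERM_EXPR <tmp, tmp, {0, 1, ..., N-1, 0, 1, ...}>;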

Thanks,
Prathamesh

>
> Thanks,
> Richard
>
> >
> > which generates:
> > f2:
> > .LFB3901:
> >         .cfi_startproc
> >         movi    v1.16b, 0x1
> >         ptrue   p0.b, all
> >         cmeq    v0.16b, v0.16b, #0
> >         bic     v0.16b, v1.16b, v0.16b
> >         dup     z0.q, z0.q[0]
> >         cmpne   p0.b, p0/z, z0.b, #0
> >         ret
> >
> > Thanks,
> > Prathamesh

[-- Attachment #2: gnu-821-6.diff --]
[-- Type: application/octet-stream, Size: 9774 bytes --]

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 42617ced73a..c6b8894386b 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -22045,11 +22045,12 @@ aarch64_simd_make_constant (rtx vals)
     return NULL_RTX;
 }
 
-/* Expand a vector initialisation sequence, such that TARGET is
-   initialised to contain VALS.  */
+/* A subroutine of aarch64_expand_vector_init, with the same interface.
+   The caller has already tried a divide-and-conquer approach, so do
+   not consider that case here.  */
 
 void
-aarch64_expand_vector_init (rtx target, rtx vals)
+aarch64_expand_vector_init_fallback (rtx target, rtx vals)
 {
   machine_mode mode = GET_MODE (target);
   scalar_mode inner_mode = GET_MODE_INNER (mode);
@@ -22109,38 +22110,6 @@ aarch64_expand_vector_init (rtx target, rtx vals)
       return;
     }
 
-  /* Check for interleaving case.
-     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
-     Generate following code:
-     dup v0.h, x
-     dup v1.h, y
-     zip1 v0.h, v0.h, v1.h
-     for "large enough" initializer.  */
-
-  if (n_elts >= 8)
-    {
-      int i;
-      for (i = 2; i < n_elts; i++)
-	if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
-	  break;
-
-      if (i == n_elts)
-	{
-	  machine_mode mode = GET_MODE (target);
-	  rtx dest[2];
-
-	  for (int i = 0; i < 2; i++)
-	    {
-	      rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
-	      dest[i] = force_reg (mode, x);
-	    }
-
-	  rtvec v = gen_rtvec (2, dest[0], dest[1]);
-	  emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
-	  return;
-	}
-    }
-
   enum insn_code icode = optab_handler (vec_set_optab, mode);
   gcc_assert (icode != CODE_FOR_nothing);
 
@@ -22262,7 +22231,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
 	    }
 	  XVECEXP (copy, 0, i) = subst;
 	}
-      aarch64_expand_vector_init (target, copy);
+      aarch64_expand_vector_init_fallback (target, copy);
     }
 
   /* Insert the variable lanes directly.  */
@@ -22276,6 +22245,81 @@ aarch64_expand_vector_init (rtx target, rtx vals)
     }
 }
 
+/* Return even or odd half of VALS depending on EVEN_P.  */
+
+static rtx
+aarch64_unzip_vector_init (machine_mode mode, rtx vals, bool even_p)
+{
+  int n = XVECLEN (vals, 0);
+  machine_mode new_mode
+    = aarch64_simd_container_mode (GET_MODE_INNER (mode),
+				   GET_MODE_BITSIZE (mode).to_constant () / 2);
+  rtvec vec = rtvec_alloc (n / 2);
+  for (int i = 0; i < n/2; i++)
+    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
+				  : XVECEXP (vals, 0, 2 * i + 1);
+  return gen_rtx_PARALLEL (new_mode, vec);
+}
+
+/* Expand a vector initialisation sequence, such that TARGET is
+   initialized to contain VALS.  */
+
+void
+aarch64_expand_vector_init (rtx target, rtx vals)
+{
+  /* Try decomposing the initializer into even and odd halves and
+     then ZIP them together.  Use the resulting sequence if it is
+     strictly cheaper than loading VALS directly.
+
+     Prefer the fallback sequence in the event of a tie, since it
+     will tend to use fewer registers.  */
+
+  machine_mode mode = GET_MODE (target);
+  int n_elts = XVECLEN (vals, 0);
+
+  if (n_elts < 4
+      || maybe_ne (GET_MODE_BITSIZE (mode), 128))
+    {
+      aarch64_expand_vector_init_fallback (target, vals);
+      return;
+    }
+
+  start_sequence ();
+  rtx halves[2];
+  unsigned costs[2];
+  for (int i = 0; i < 2; i++)
+    {
+      start_sequence ();
+      rtx new_vals
+	= aarch64_unzip_vector_init (mode, vals, (i % 2) == 0);
+      rtx tmp_reg = gen_reg_rtx (GET_MODE (new_vals));
+      aarch64_expand_vector_init (tmp_reg, new_vals);
+      halves[i] = gen_rtx_SUBREG (mode, tmp_reg, 0);
+      rtx_insn *rec_seq = get_insns ();
+      end_sequence ();
+      costs[i] = seq_cost (rec_seq, !optimize_size);
+      emit_insn (rec_seq);
+    }
+
+  rtvec v = gen_rtvec (2, halves[0], halves[1]);
+  rtx_insn *zip1_insn
+    = emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
+  unsigned seq_total_cost
+    = (!optimize_size) ? std::max (costs[0], costs[1]) : costs[0] + costs[1];
+  seq_total_cost += insn_cost (zip1_insn, !optimize_size);
+
+  rtx_insn *seq = get_insns ();
+  end_sequence ();
+
+  start_sequence ();
+  aarch64_expand_vector_init_fallback (target, vals);
+  rtx_insn *fallback_seq = get_insns ();
+  unsigned fallback_seq_cost = seq_cost (fallback_seq, !optimize_size);
+  end_sequence ();
+
+  emit_insn (seq_total_cost < fallback_seq_cost ? seq : fallback_seq);
+}
+
 /* Emit RTL corresponding to:
    insr TARGET, ELEM.  */
 
diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
similarity index 82%
rename from gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
rename to gcc/testsuite/gcc.target/aarch64/vec-init-18.c
index ee775048589..e812d3946de 100644
--- a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
@@ -7,8 +7,8 @@
 /*
 ** foo:
 **	...
-**	dup	v[0-9]+\.8h, w[0-9]+
-**	dup	v[0-9]+\.8h, w[0-9]+
+**	dup	v[0-9]+\.4h, w[0-9]+
+**	dup	v[0-9]+\.4h, w[0-9]+
 **	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
 **	...
 **	ret
@@ -23,8 +23,8 @@ int16x8_t foo(int16_t x, int y)
 /*
 ** foo2:
 **	...
-**	dup	v[0-9]+\.8h, w[0-9]+
-**	movi	v[0-9]+\.8h, 0x1
+**	dup	v[0-9]+\.4h, w[0-9]+
+**	movi	v[0-9]+\.4h, 0x1
 **	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
 **	...
 **	ret
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
new file mode 100644
index 00000000000..e28fdcda29d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	dup	v[0-9]+\.8b, w[0-9]+
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x)
+{
+  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
+                       x, 5, x, 6, x, 7, x, 8 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
new file mode 100644
index 00000000000..9366ca349b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	dup	v[0-9]+\.8b, w[0-9]+
+**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
+                       4, y, 5, y, 6, y, 7, y };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
new file mode 100644
index 00000000000..e16459486d7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	ins	v0\.b\[1\], w1
+**	...
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
+                       7, 8, 9, 10, 11, 12, 13, 14 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
new file mode 100644
index 00000000000..8f35854c008
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-options "-Os" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+/* Verify that fallback code-sequence is chosen over
+   recursively generated code-sequence merged with zip1.  */
+
+/*
+** f_s16:
+**	...
+**	sxth	w0, w0
+**	fmov	s0, w0
+**	ins	v0\.h\[1\], w1
+**	ins	v0\.h\[2\], w2
+**	ins	v0\.h\[3\], w3
+**	ins	v0\.h\[4\], w4
+**	ins	v0\.h\[5\], w5
+**	ins	v0\.h\[6\], w6
+**	ins	v0\.h\[7\], w7
+**	...
+**	ret
+*/
+
+#include "vec-init-22.h"
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
new file mode 100644
index 00000000000..172d56ffdf1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+/* Verify that we recursively generate code for even and odd halves
+   instead of fallback code. This is so despite the longer code-gen
+   because it has fewer dependencies and thus has lesser cost.  */
+
+/*
+** f_s16:
+**	...
+**	sxth	w0, w0
+**	sxth	w1, w1
+**	fmov	d0, x0
+**	fmov	d1, x1
+**	ins	v[0-9]+\.h\[1\], w2
+**	ins	v[0-9]+\.h\[1\], w3
+**	ins	v[0-9]+\.h\[2\], w4
+**	ins	v[0-9]+\.h\[2\], w5
+**	ins	v[0-9]+\.h\[3\], w6
+**	ins	v[0-9]+\.h\[3\], w7
+**	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
+**	...
+**	ret
+*/
+
+#include "vec-init-22.h"
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.h b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
new file mode 100644
index 00000000000..15b889d4097
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
@@ -0,0 +1,7 @@
+#include <arm_neon.h>
+
+int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
+                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
+{
+  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
+}

