From: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
To: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>,
	Richard Biener <rguenther@suse.de>,
	 gcc Patches <gcc-patches@gcc.gnu.org>,
	richard.sandiford@arm.com
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Thu, 4 May 2023 17:17:31 +0530	[thread overview]
Message-ID: <CAAgBjMm2Fn5THbXFHudD--M7QzZYXDv=qi1XMRxZHm-tHFTRWA@mail.gmail.com> (raw)
In-Reply-To: <mptmt2x631l.fsf@arm.com>

[-- Attachment #1: Type: text/plain, Size: 14687 bytes --]

On Mon, 24 Apr 2023 at 15:00, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > [aarch64] Recursively initialize even and odd sub-parts and merge with zip1.
> >
> > gcc/ChangeLog:
> >       * config/aarch64/aarch64.cc (aarch64_expand_vector_init_fallback): Rename
> >       aarch64_expand_vector_init to this, and remove interleaving case.
> >       Recursively call aarch64_expand_vector_init_fallback, instead of
> >       aarch64_expand_vector_init.
> >       (aarch64_unzip_vector_init): New function.
> >       (aarch64_expand_vector_init): Likewise.
> >
> > gcc/testsuite/ChangeLog:
> >       * gcc.target/aarch64/ldp_stp_16.c (cons2_8_float): Adjust for new
> >       code-gen.
> >       * gcc.target/aarch64/sve/acle/general/dupq_5.c: Likewise.
> >       * gcc.target/aarch64/sve/acle/general/dupq_6.c: Likewise.
> >       * gcc.target/aarch64/vec-init-18.c: Rename interleave-init-1.c to
> >       this.
> >       * gcc.target/aarch64/vec-init-19.c: New test.
> >       * gcc.target/aarch64/vec-init-20.c: Likewise.
> >       * gcc.target/aarch64/vec-init-21.c: Likewise.
> >       * gcc.target/aarch64/vec-init-22-size.c: Likewise.
> >       * gcc.target/aarch64/vec-init-22-speed.c: Likewise.
> >       * gcc.target/aarch64/vec-init-22.h: New header.
> >
> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> > index d7e895f8d34..416e062829c 100644
> > --- a/gcc/config/aarch64/aarch64.cc
> > +++ b/gcc/config/aarch64/aarch64.cc
> > @@ -22026,11 +22026,12 @@ aarch64_simd_make_constant (rtx vals)
> >      return NULL_RTX;
> >  }
> >
> > -/* Expand a vector initialisation sequence, such that TARGET is
> > -   initialised to contain VALS.  */
> > +/* A subroutine of aarch64_expand_vector_init, with the same interface.
> > +   The caller has already tried a divide-and-conquer approach, so do
> > +   not consider that case here.  */
> >
> >  void
> > -aarch64_expand_vector_init (rtx target, rtx vals)
> > +aarch64_expand_vector_init_fallback (rtx target, rtx vals)
> >  {
> >    machine_mode mode = GET_MODE (target);
> >    scalar_mode inner_mode = GET_MODE_INNER (mode);
> > @@ -22090,38 +22091,6 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >        return;
> >      }
> >
> > -  /* Check for interleaving case.
> > -     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
> > -     Generate following code:
> > -     dup v0.h, x
> > -     dup v1.h, y
> > -     zip1 v0.h, v0.h, v1.h
> > -     for "large enough" initializer.  */
> > -
> > -  if (n_elts >= 8)
> > -    {
> > -      int i;
> > -      for (i = 2; i < n_elts; i++)
> > -     if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
> > -       break;
> > -
> > -      if (i == n_elts)
> > -     {
> > -       machine_mode mode = GET_MODE (target);
> > -       rtx dest[2];
> > -
> > -       for (int i = 0; i < 2; i++)
> > -         {
> > -           rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
> > -           dest[i] = force_reg (mode, x);
> > -         }
> > -
> > -       rtvec v = gen_rtvec (2, dest[0], dest[1]);
> > -       emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> > -       return;
> > -     }
> > -    }
> > -
> >    enum insn_code icode = optab_handler (vec_set_optab, mode);
> >    gcc_assert (icode != CODE_FOR_nothing);
> >
> > @@ -22243,7 +22212,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >           }
> >         XVECEXP (copy, 0, i) = subst;
> >       }
> > -      aarch64_expand_vector_init (target, copy);
> > +      aarch64_expand_vector_init_fallback (target, copy);
> >      }
> >
> >    /* Insert the variable lanes directly.  */
> > @@ -22257,6 +22226,81 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >      }
> >  }
> >
> > +/* Return even or odd half of VALS depending on EVEN_P.  */
> > +
> > +static rtx
> > +aarch64_unzip_vector_init (machine_mode mode, rtx vals, bool even_p)
> > +{
> > +  int n = XVECLEN (vals, 0);
> > +  machine_mode new_mode
> > +    = aarch64_simd_container_mode (GET_MODE_INNER (mode),
> > +                                GET_MODE_BITSIZE (mode).to_constant () / 2);
> > +  rtvec vec = rtvec_alloc (n / 2);
> > +  for (int i = 0; i < n/2; i++)
>
> Formatting nit: n / 2
>
> > +    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
> > +                               : XVECEXP (vals, 0, 2 * i + 1);
> > +  return gen_rtx_PARALLEL (new_mode, vec);
> > +}
> > +
> > +/* Expand a vector initialisation sequence, such that TARGET is
>
> initialization
>
> > +   initialized to contain VALS.  */
> > +
> > +void
> > +aarch64_expand_vector_init (rtx target, rtx vals)
> > +{
> > +  /* Try decomposing the initializer into even and odd halves and
> > +     then ZIP them together.  Use the resulting sequence if it is
> > +     strictly cheaper than loading VALS directly.
> > +
> > +     Prefer the fallback sequence in the event of a tie, since it
> > +     will tend to use fewer registers.  */
> > +
> > +  machine_mode mode = GET_MODE (target);
> > +  int n_elts = XVECLEN (vals, 0);
> > +
> > +  if (n_elts < 4
> > +      || maybe_ne (GET_MODE_BITSIZE (mode), 128))
> > +    {
> > +      aarch64_expand_vector_init_fallback (target, vals);
> > +      return;
> > +    }
> > +
> > +  start_sequence ();
> > +  rtx halves[2];
> > +  unsigned costs[2];
> > +  for (int i = 0; i < 2; i++)
> > +    {
> > +      start_sequence ();
> > +      rtx new_vals
> > +     = aarch64_unzip_vector_init (mode, vals, (i % 2) == 0);
>
> Just i == 0 would be enough.  Also, this fits on one line.
>
> > +      rtx tmp_reg = gen_reg_rtx (GET_MODE (new_vals));
> > +      aarch64_expand_vector_init (tmp_reg, new_vals);
> > +      halves[i] = gen_rtx_SUBREG (mode, tmp_reg, 0);
> > +      rtx_insn *rec_seq = get_insns ();
> > +      end_sequence ();
> > +      costs[i] = seq_cost (rec_seq, !optimize_size);
> > +      emit_insn (rec_seq);
> > +    }
> > +
> > +  rtvec v = gen_rtvec (2, halves[0], halves[1]);
> > +  rtx_insn *zip1_insn
> > +    = emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> > +  unsigned seq_total_cost
> > +    = (!optimize_size) ? std::max (costs[0], costs[1]) : costs[0] + costs[1];
> > +  seq_total_cost += insn_cost (zip1_insn, !optimize_size);
> > +
> > +  rtx_insn *seq = get_insns ();
> > +  end_sequence ();
> > +
> > +  start_sequence ();
> > +  aarch64_expand_vector_init_fallback (target, vals);
> > +  rtx_insn *fallback_seq = get_insns ();
> > +  unsigned fallback_seq_cost = seq_cost (fallback_seq, !optimize_size);
> > +  end_sequence ();
> > +
> > +  emit_insn (seq_total_cost < fallback_seq_cost ? seq : fallback_seq);
> > +}
> > +
> >  /* Emit RTL corresponding to:
> >     insr TARGET, ELEM.  */
> >
> > diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> > index 8ab117c4dcd..30c86018773 100644
> > --- a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> > +++ b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> > @@ -96,10 +96,10 @@ CONS2_FN (4, float);
> >
> >  /*
> >  ** cons2_8_float:
> > -**   dup     v([0-9]+)\.4s, .*
> > +**   dup     v([0-9]+)\.2s, v1.s\[0\]
> >  **   ...
> > -**   stp     q\1, q\1, \[x0\]
> > -**   stp     q\1, q\1, \[x0, #?32\]
> > +**   stp     q0, q0, \[x0\]
> > +**   stp     q0, q0, \[x0, #?32\]
>
> Leaving the capture in the first line while hard-coding q0 at the end
> doesn't look right.  The original was written that way because nothing
> guarantees a particular register allocation.
>
> I think this now needs to match more of the sequence.
>
> > diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> > similarity index 82%
> > rename from gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> > rename to gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> > index ee775048589..e812d3946de 100644
> > --- a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> > @@ -7,8 +7,8 @@
> >  /*
> >  ** foo:
> >  **   ...
> > -**   dup     v[0-9]+\.8h, w[0-9]+
> > -**   dup     v[0-9]+\.8h, w[0-9]+
> > +**   dup     v[0-9]+\.4h, w[0-9]+
> > +**   dup     v[0-9]+\.4h, w[0-9]+
> >  **   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> >  **   ...
> >  **   ret
> > @@ -23,8 +23,8 @@ int16x8_t foo(int16_t x, int y)
> >  /*
> >  ** foo2:
> >  **   ...
> > -**   dup     v[0-9]+\.8h, w[0-9]+
> > -**   movi    v[0-9]+\.8h, 0x1
> > +**   dup     v[0-9]+\.4h, w[0-9]+
> > +**   movi    v[0-9]+\.4h, 0x1
> >  **   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> >  **   ...
> >  **   ret
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
> > new file mode 100644
> > index 00000000000..e28fdcda29d
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
> > @@ -0,0 +1,21 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O3" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +#include <arm_neon.h>
> > +
> > +/*
> > +** f_s8:
> > +**   ...
> > +**   dup     v[0-9]+\.8b, w[0-9]+
> > +**   adrp    x[0-9]+, \.LC[0-9]+
> > +**   ldr     d[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
> > +**   zip1    v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
>
> This kind of match is dangerous for a test that enables scheduling,
> since the zip sequences start with two independent sequences that
> build 64-bit vectors.
>
> Since the lines of the match don't build on each other (e.g. they
> don't use captures to ensure that the zip operands are in the right
> order), I think it'd be better to use scan-assemblers instead.
>
> There's then no need to match the adrp, or the exact addressing
> mode of the ldr.  Just {ldr\td[0-9]+, } would be enough.
>
> Same comments for the other tests.
>
> Please also check that the new tests pass on big-endian targets.
Hi Richard,
Thanks for the suggestions, I have tried to address them in the attached patch.
I verified that the new tests pass on aarch64_be-linux-gnu, and the patch is
under bootstrap+test on aarch64-linux-gnu.
OK to commit if it passes?
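
As a self-contained illustration of the strategy for anyone skimming the
thread (the functions below are stand-ins with illustrative names, not the
GCC internals themselves): split the lanes into even and odd halves, build
each half, then interleave the halves zip1-style.  Round-tripping shows the
decomposition is lossless:

#include <assert.h>

/* Stand-in for aarch64_unzip_vector_init: copy the even or odd lanes of
   IN (N lanes) into OUT (N/2 lanes).  */
static void
unzip (const int *in, int n, int even_p, int *out)
{
  for (int i = 0; i < n / 2; i++)
    out[i] = even_p ? in[2 * i] : in[2 * i + 1];
}

/* Stand-in for the zip1 merge: interleave two N/2-lane halves.  */
static void
zip1_merge (const int *even, const int *odd, int n, int *out)
{
  for (int i = 0; i < n / 2; i++)
    {
      out[2 * i] = even[i];
      out[2 * i + 1] = odd[i];
    }
}

int
main (void)
{
  int v[8] = { 10, 1, 10, 2, 10, 3, 10, 4 };  /* an interleaved pattern */
  int e[4], o[4], r[8];
  unzip (v, 8, 1, e);   /* even lanes: 10, 10, 10, 10 -> a single dup */
  unzip (v, 8, 0, o);   /* odd lanes:  1, 2, 3, 4     -> a single ldr */
  zip1_merge (e, o, 8, r);
  for (int i = 0; i < 8; i++)
    assert (r[i] == v[i]);  /* zip1 of the halves reconstructs VALS */
  return 0;
}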

Thanks,
Prathamesh
>
> Thanks,
> Richard
>
> > +**   ret
> > +*/
> > +
> > +int8x16_t f_s8(int8_t x)
> > +{
> > +  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
> > +                       x, 5, x, 6, x, 7, x, 8 };
> > +}
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
> > new file mode 100644
> > index 00000000000..9366ca349b6
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
> > @@ -0,0 +1,22 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O3" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +#include <arm_neon.h>
> > +
> > +/*
> > +** f_s8:
> > +**   ...
> > +**   adrp    x[0-9]+, \.LC[0-9]+
> > +**   dup     v[0-9]+\.8b, w[0-9]+
> > +**   ldr     d[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
> > +**   ins     v0\.b\[0\], w0
> > +**   zip1    v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
> > +**   ret
> > +*/
> > +
> > +int8x16_t f_s8(int8_t x, int8_t y)
> > +{
> > +  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
> > +                       4, y, 5, y, 6, y, 7, y };
> > +}
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
> > new file mode 100644
> > index 00000000000..e16459486d7
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
> > @@ -0,0 +1,22 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O3" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +#include <arm_neon.h>
> > +
> > +/*
> > +** f_s8:
> > +**   ...
> > +**   adrp    x[0-9]+, \.LC[0-9]+
> > +**   ldr     q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
> > +**   ins     v0\.b\[0\], w0
> > +**   ins     v0\.b\[1\], w1
> > +**   ...
> > +**   ret
> > +*/
> > +
> > +int8x16_t f_s8(int8_t x, int8_t y)
> > +{
> > +  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
> > +                       7, 8, 9, 10, 11, 12, 13, 14 };
> > +}
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
> > new file mode 100644
> > index 00000000000..8f35854c008
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
> > @@ -0,0 +1,24 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-Os" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +/* Verify that the fallback code sequence is chosen over the
> > +   recursively generated code sequence merged with zip1.  */
> > +
> > +/*
> > +** f_s16:
> > +**   ...
> > +**   sxth    w0, w0
> > +**   fmov    s0, w0
> > +**   ins     v0\.h\[1\], w1
> > +**   ins     v0\.h\[2\], w2
> > +**   ins     v0\.h\[3\], w3
> > +**   ins     v0\.h\[4\], w4
> > +**   ins     v0\.h\[5\], w5
> > +**   ins     v0\.h\[6\], w6
> > +**   ins     v0\.h\[7\], w7
> > +**   ...
> > +**   ret
> > +*/
> > +
> > +#include "vec-init-22.h"
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
> > new file mode 100644
> > index 00000000000..172d56ffdf1
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
> > @@ -0,0 +1,27 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O3" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +/* Verify that we recursively generate code for even and odd halves
> > +   instead of fallback code.  This is so despite the longer code-gen,
> > +   because it has fewer dependencies and thus a lower cost.  */
> > +
> > +/*
> > +** f_s16:
> > +**   ...
> > +**   sxth    w0, w0
> > +**   sxth    w1, w1
> > +**   fmov    d0, x0
> > +**   fmov    d1, x1
> > +**   ins     v[0-9]+\.h\[1\], w2
> > +**   ins     v[0-9]+\.h\[1\], w3
> > +**   ins     v[0-9]+\.h\[2\], w4
> > +**   ins     v[0-9]+\.h\[2\], w5
> > +**   ins     v[0-9]+\.h\[3\], w6
> > +**   ins     v[0-9]+\.h\[3\], w7
> > +**   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> > +**   ...
> > +**   ret
> > +*/
> > +
> > +#include "vec-init-22.h"
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.h b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
> > new file mode 100644
> > index 00000000000..15b889d4097
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
> > @@ -0,0 +1,7 @@
> > +#include <arm_neon.h>
> > +
> > +int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
> > +                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
> > +{
> > +  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
> > +}
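
P.S. For the record, the capture idiom Richard describes for
check-function-bodies looks like the sketch below (patterned on the original
cons2_8_float match in ldp_stp_16.c; not a complete test).  The parenthesised
group in the dup line captures a register number, and \1 then pins the stp
lines to that same register, since nothing guarantees a particular
allocation:

/*
** cons_example:
**	dup	v([0-9]+)\.4s, .*
**	...
**	stp	q\1, q\1, \[x0\]
**	stp	q\1, q\1, \[x0, #?32\]
**	ret
*/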

[-- Attachment #2: gnu-821-10.txt --]
[-- Type: text/plain, Size: 13770 bytes --]

[aarch64] Recursively initialize even and odd sub-parts and merge with zip1.

gcc/ChangeLog:
	* config/aarch64/aarch64.cc (aarch64_expand_vector_init_fallback): Rename
	aarch64_expand_vector_init to this, and remove interleaving case.
	Recursively call aarch64_expand_vector_init_fallback, instead of
	aarch64_expand_vector_init.
	(aarch64_unzip_vector_init): New function.
	(aarch64_expand_vector_init): Likewise.

gcc/testsuite/ChangeLog:
	* gcc.target/aarch64/ldp_stp_16.c (cons2_8_float): Adjust for new
	code-gen.
	* gcc.target/aarch64/sve/acle/general/dupq_5.c: Likewise.
	* gcc.target/aarch64/sve/acle/general/dupq_6.c: Likewise.
	* gcc.target/aarch64/vec-init-18.c: Rename interleave-init-1.c to
	this.
	* gcc.target/aarch64/vec-init-19.c: New test.
	* gcc.target/aarch64/vec-init-20.c: Likewise.
	* gcc.target/aarch64/vec-init-21.c: Likewise.
	* gcc.target/aarch64/vec-init-22-size.c: Likewise.
	* gcc.target/aarch64/vec-init-22-speed.c: Likewise.
	* gcc.target/aarch64/vec-init-22.h: New header.

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 2b0de7ca038..48ece0ad328 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -22060,11 +22060,12 @@ aarch64_simd_make_constant (rtx vals)
     return NULL_RTX;
 }
 
-/* Expand a vector initialisation sequence, such that TARGET is
-   initialised to contain VALS.  */
+/* A subroutine of aarch64_expand_vector_init, with the same interface.
+   The caller has already tried a divide-and-conquer approach, so do
+   not consider that case here.  */
 
 void
-aarch64_expand_vector_init (rtx target, rtx vals)
+aarch64_expand_vector_init_fallback (rtx target, rtx vals)
 {
   machine_mode mode = GET_MODE (target);
   scalar_mode inner_mode = GET_MODE_INNER (mode);
@@ -22124,38 +22125,6 @@ aarch64_expand_vector_init (rtx target, rtx vals)
       return;
     }
 
-  /* Check for interleaving case.
-     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
-     Generate following code:
-     dup v0.h, x
-     dup v1.h, y
-     zip1 v0.h, v0.h, v1.h
-     for "large enough" initializer.  */
-
-  if (n_elts >= 8)
-    {
-      int i;
-      for (i = 2; i < n_elts; i++)
-	if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
-	  break;
-
-      if (i == n_elts)
-	{
-	  machine_mode mode = GET_MODE (target);
-	  rtx dest[2];
-
-	  for (int i = 0; i < 2; i++)
-	    {
-	      rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
-	      dest[i] = force_reg (mode, x);
-	    }
-
-	  rtvec v = gen_rtvec (2, dest[0], dest[1]);
-	  emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
-	  return;
-	}
-    }
-
   enum insn_code icode = optab_handler (vec_set_optab, mode);
   gcc_assert (icode != CODE_FOR_nothing);
 
@@ -22277,7 +22246,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
 	    }
 	  XVECEXP (copy, 0, i) = subst;
 	}
-      aarch64_expand_vector_init (target, copy);
+      aarch64_expand_vector_init_fallback (target, copy);
     }
 
   /* Insert the variable lanes directly.  */
@@ -22291,6 +22260,80 @@ aarch64_expand_vector_init (rtx target, rtx vals)
     }
 }
 
+/* Return even or odd half of VALS depending on EVEN_P.  */
+
+static rtx
+aarch64_unzip_vector_init (machine_mode mode, rtx vals, bool even_p)
+{
+  int n = XVECLEN (vals, 0);
+  machine_mode new_mode
+    = aarch64_simd_container_mode (GET_MODE_INNER (mode),
+				   GET_MODE_BITSIZE (mode).to_constant () / 2);
+  rtvec vec = rtvec_alloc (n / 2);
+  for (int i = 0; i < n / 2; i++)
+    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
+				  : XVECEXP (vals, 0, 2 * i + 1);
+  return gen_rtx_PARALLEL (new_mode, vec);
+}
+
+/* Expand a vector initialization sequence, such that TARGET is
+   initialized to contain VALS.  */
+
+void
+aarch64_expand_vector_init (rtx target, rtx vals)
+{
+  /* Try decomposing the initializer into even and odd halves and
+     then ZIP them together.  Use the resulting sequence if it is
+     strictly cheaper than loading VALS directly.
+
+     Prefer the fallback sequence in the event of a tie, since it
+     will tend to use fewer registers.  */
+
+  machine_mode mode = GET_MODE (target);
+  int n_elts = XVECLEN (vals, 0);
+
+  if (n_elts < 4
+      || maybe_ne (GET_MODE_BITSIZE (mode), 128))
+    {
+      aarch64_expand_vector_init_fallback (target, vals);
+      return;
+    }
+
+  start_sequence ();
+  rtx halves[2];
+  unsigned costs[2];
+  for (int i = 0; i < 2; i++)
+    {
+      start_sequence ();
+      rtx new_vals = aarch64_unzip_vector_init (mode, vals, i == 0);
+      rtx tmp_reg = gen_reg_rtx (GET_MODE (new_vals));
+      aarch64_expand_vector_init (tmp_reg, new_vals);
+      halves[i] = gen_rtx_SUBREG (mode, tmp_reg, 0);
+      rtx_insn *rec_seq = get_insns ();
+      end_sequence ();
+      costs[i] = seq_cost (rec_seq, !optimize_size);
+      emit_insn (rec_seq);
+    }
+
+  rtvec v = gen_rtvec (2, halves[0], halves[1]);
+  rtx_insn *zip1_insn
+    = emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
+  unsigned seq_total_cost
+    = (!optimize_size) ? std::max (costs[0], costs[1]) : costs[0] + costs[1];
+  seq_total_cost += insn_cost (zip1_insn, !optimize_size);
+
+  rtx_insn *seq = get_insns ();
+  end_sequence ();
+
+  start_sequence ();
+  aarch64_expand_vector_init_fallback (target, vals);
+  rtx_insn *fallback_seq = get_insns ();
+  unsigned fallback_seq_cost = seq_cost (fallback_seq, !optimize_size);
+  end_sequence ();
+
+  emit_insn (seq_total_cost < fallback_seq_cost ? seq : fallback_seq);
+}
+
 /* Emit RTL corresponding to:
    insr TARGET, ELEM.  */
 
diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
deleted file mode 100644
index ee775048589..00000000000
--- a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
+++ /dev/null
@@ -1,37 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O3" } */
-/* { dg-final { check-function-bodies "**" "" "" } } */
-
-#include <arm_neon.h>
-
-/*
-** foo:
-**	...
-**	dup	v[0-9]+\.8h, w[0-9]+
-**	dup	v[0-9]+\.8h, w[0-9]+
-**	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
-**	...
-**	ret
-*/
-
-int16x8_t foo(int16_t x, int y)
-{
-  int16x8_t v = (int16x8_t) {x, y, x, y, x, y, x, y}; 
-  return v;
-}
-
-/*
-** foo2:
-**	...
-**	dup	v[0-9]+\.8h, w[0-9]+
-**	movi	v[0-9]+\.8h, 0x1
-**	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
-**	...
-**	ret
-*/
-
-int16x8_t foo2(int16_t x) 
-{
-  int16x8_t v = (int16x8_t) {x, 1, x, 1, x, 1, x, 1}; 
-  return v;
-}
diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
index 8ab117c4dcd..ba14194d0a4 100644
--- a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
+++ b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
@@ -96,8 +96,9 @@ CONS2_FN (4, float);
 
 /*
 ** cons2_8_float:
-**	dup	v([0-9]+)\.4s, .*
-**	...
+**	dup	v[0-9]+\.2s, v[0-9]+\.s\[0\]
+**	dup	v[0-9]+\.2s, v[0-9]+\.s\[0\]
+**	zip1	v([0-9]+)\.4s, v[0-9]+\.4s, v[0-9]+\.4s
 **	stp	q\1, q\1, \[x0\]
 **	stp	q\1, q\1, \[x0, #?32\]
 **	ret
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_5.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_5.c
index 53426c9af5a..c7d6f3ff390 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_5.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_5.c
@@ -11,7 +11,7 @@ dupq (int x1, int x2, int x3, int x4)
 
 /* { dg-final { scan-assembler-not {\tldr\t} } } */
 /* { dg-final { scan-assembler {, [wx]0\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w1\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[2\], w2\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[3\], w3\n} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w2\n} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w3\n} } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.4s, v[0-9]+\.4s, v[0-9]+\.4s\n} } } */
 /* { dg-final { scan-assembler {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]\n} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_6.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_6.c
index dfce5e7a12a..4745a3815b0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_6.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_6.c
@@ -12,7 +12,7 @@ dupq (int x1, int x2, int x3, int x4)
 
 /* { dg-final { scan-assembler-not {\tldr\t} } } */
 /* { dg-final { scan-assembler {, [wx]0\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w1\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[2\], w2\n} } } */
-/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[3\], w3\n} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w2\n} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.s\[1\], w3\n} } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.4s, v[0-9]+\.4s, v[0-9]+\.4s\n} } } */
 /* { dg-final { scan-assembler {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]\n} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-18.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
new file mode 100644
index 00000000000..598a51f17c6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+#include <arm_neon.h>
+
+int16x8_t foo(int16_t x, int y)
+{
+  int16x8_t v = (int16x8_t) {x, y, x, y, x, y, x, y}; 
+  return v;
+}
+
+int16x8_t foo2(int16_t x) 
+{
+  int16x8_t v = (int16x8_t) {x, 1, x, 1, x, 1, x, 1}; 
+  return v;
+}
+
+/* { dg-final { scan-assembler-times {\tdup\tv[0-9]+\.4h, w[0-9]+} 3 } } */
+/* { dg-final { scan-assembler {\tmovi\tv[0-9]+\.4h, 0x1} } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
new file mode 100644
index 00000000000..46e9dbf51a3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
@@ -0,0 +1,14 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+#include <arm_neon.h>
+
+int8x16_t f_s8(int8_t x)
+{
+  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
+                       x, 5, x, 6, x, 7, x, 8 };
+}
+
+/* { dg-final { scan-assembler {\tdup\tv[0-9]+\.8b, w[0-9]+} } } */
+/* { dg-final { scan-assembler {\tldr\td[0-9]+,} } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
new file mode 100644
index 00000000000..4494121cb2d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+#include <arm_neon.h>
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
+                       4, y, 5, y, 6, y, 7, y };
+}
+
+/* { dg-final { scan-assembler {\tdup\tv[0-9]+\.8b, w[0-9]+} } } */
+/* { dg-final { scan-assembler {\tldr\td[0-9]+,} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.b\[[07]\], w[0-9]+} } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
new file mode 100644
index 00000000000..f53e0ed08d5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
@@ -0,0 +1,14 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+#include <arm_neon.h>
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
+                       7, 8, 9, 10, 11, 12, 13, 14 };
+}
+
+/* { dg-final { scan-assembler {\tldr\tq[0-9]+,} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.b\[(0|15)\], w0} } } */
+/* { dg-final { scan-assembler {\tins\tv[0-9]+\.b\[(1|14)\], w1} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
new file mode 100644
index 00000000000..4333ff50205
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
@@ -0,0 +1,10 @@
+/* { dg-do compile } */
+/* { dg-options "-Os" } */
+
+/* Verify that the fallback code sequence is chosen over the
+   recursively generated code sequence merged with zip1.  */
+
+#include "vec-init-22.h"
+
+/* { dg-final { scan-assembler {\tfmov\ts[0-9]+, (w0|w7)} } } */
+/* { dg-final { scan-assembler-times {\tins\tv[0-9]+\.h\[[1-7]\], w[0-9]+} 7 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
new file mode 100644
index 00000000000..993ef8c4161
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+/* Verify that we recursively generate code for even and odd halves
+   instead of fallback code.  This is so despite the longer code-gen,
+   because it has fewer dependencies and thus a lower cost.  */
+
+#include "vec-init-22.h"
+
+/* { dg-final { scan-assembler-times {\tfmov\td[0-9]+, x[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {\tins\tv[0-9]+\.h\[[1-3]\], w[0-9]+} 6 } } */
+/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.h b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
new file mode 100644
index 00000000000..15b889d4097
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
@@ -0,0 +1,7 @@
+#include <arm_neon.h>
+
+int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
+                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
+{
+  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
+}
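
A small standalone sketch of the costing above, assuming, as the patch does,
that seq_cost/insn_cost give additive per-insn costs; the helper names are
hypothetical, not the GCC functions themselves.  For speed, the two
half-building subsequences are independent, so the zip variant is charged
the slower half plus the zip1; for size, every insn is emitted, so the
halves are summed.  Ties go to the fallback, which tends to use fewer
registers:

static unsigned
zip_seq_cost (unsigned cost_even, unsigned cost_odd, unsigned zip1_cost,
	      int optimize_speed)
{
  /* Speed: the halves execute independently, so take the max.
     Size: every instruction counts, so take the sum.  */
  unsigned halves = optimize_speed
		    ? (cost_even > cost_odd ? cost_even : cost_odd)
		    : cost_even + cost_odd;
  return halves + zip1_cost;
}

static int
prefer_zip_seq (unsigned zip_cost, unsigned fallback_cost)
{
  /* Strictly cheaper only: on a tie, prefer the fallback.  */
  return zip_cost < fallback_cost;
}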
