From: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
To: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>,
	gcc Patches <gcc-patches@gcc.gnu.org>,
	 richard.sandiford@arm.com
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Wed, 1 Feb 2023 15:06:35 +0530
Message-ID: <CAAgBjMndgAd5eS52rKq+5MsqzA2FRiXM_3CLiovgD9rn8f6TBw@mail.gmail.com>
In-Reply-To: <mptmt6nvjhu.fsf@arm.com>

[-- Attachment #1: Type: text/plain, Size: 16248 bytes --]

On Thu, 12 Jan 2023 at 21:21, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Tue, 6 Dec 2022 at 07:01, Prathamesh Kulkarni
> > <prathamesh.kulkarni@linaro.org> wrote:
> >>
> >> On Mon, 5 Dec 2022 at 16:50, Richard Sandiford
> >> <richard.sandiford@arm.com> wrote:
> >> >
> >> > Richard Sandiford via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> >> > > Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > >> Hi,
> >> > >> For the following test-case:
> >> > >>
> >> > >> int16x8_t foo(int16_t x, int16_t y)
> >> > >> {
> >> > >>   return (int16x8_t) { x, y, x, y, x, y, x, y };
> >> > >> }
> >> > >>
> >> > >> Code gen at -O3:
> >> > >> foo:
> >> > >>         dup    v0.8h, w0
> >> > >>         ins     v0.h[1], w1
> >> > >>         ins     v0.h[3], w1
> >> > >>         ins     v0.h[5], w1
> >> > >>         ins     v0.h[7], w1
> >> > >>         ret
> >> > >>
> >> > >> For 16 elements, it results in 8 ins instructions, which might not be
> >> > >> optimal.
> >> > >> I guess the above code-gen would be equivalent to the following?
> >> > >> dup v0.8h, w0
> >> > >> dup v1.8h, w1
> >> > >> zip1 v0.8h, v0.8h, v1.8h
> >> > >>
> >> > >> I have attached a patch to do the same if the number of elements >= 8,
> >> > >> which should hopefully be better than the current code-gen?
> >> > >> Patch passes bootstrap+test on aarch64-linux-gnu.
> >> > >> Does the patch look OK ?
> >> > >>
> >> > >> Thanks,
> >> > >> Prathamesh
> >> > >>
> >> > >> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> >> > >> index c91df6f5006..e5dea70e363 100644
> >> > >> --- a/gcc/config/aarch64/aarch64.cc
> >> > >> +++ b/gcc/config/aarch64/aarch64.cc
> >> > >> @@ -22028,6 +22028,39 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >> > >>        return;
> >> > >>      }
> >> > >>
> >> > >> +  /* Check for interleaving case.
> >> > >> +     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
> >> > >> +     Generate following code:
> >> > >> +     dup v0.h, x
> >> > >> +     dup v1.h, y
> >> > >> +     zip1 v0.h, v0.h, v1.h
> >> > >> +     for "large enough" initializer.  */
> >> > >> +
> >> > >> +  if (n_elts >= 8)
> >> > >> +    {
> >> > >> +      int i;
> >> > >> +      for (i = 2; i < n_elts; i++)
> >> > >> +    if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
> >> > >> +      break;
> >> > >> +
> >> > >> +      if (i == n_elts)
> >> > >> +    {
> >> > >> +      machine_mode mode = GET_MODE (target);
> >> > >> +      rtx dest[2];
> >> > >> +
> >> > >> +      for (int i = 0; i < 2; i++)
> >> > >> +        {
> >> > >> +          rtx x = copy_to_mode_reg (GET_MODE_INNER (mode), XVECEXP (vals, 0, i));
> >> > >
> >> > > Formatting nit: long line.
> >> > >
> >> > >> +          dest[i] = gen_reg_rtx (mode);
> >> > >> +          aarch64_emit_move (dest[i], gen_vec_duplicate (mode, x));
> >> > >> +        }
> >> > >
> >> > > This could probably be written:
> >> > >
> >> > >         for (int i = 0; i < 2; i++)
> >> > >           {
> >> > >             rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
> >> > >             dest[i] = force_reg (GET_MODE_INNER (mode), x);
> >> >
> >> > Oops, I meant "mode" rather than "GET_MODE_INNER (mode)", sorry.
> >> Thanks, I have pushed the change in
> >> 769370f3e2e04823c8a621d8ffa756dd83ebf21e after running
> >> bootstrap+test on aarch64-linux-gnu.
> > Hi Richard,
> > I have attached a patch that extends the transform when one half is a dup
> > and the other is a set of constants.
> > For eg:
> > int8x16_t f(int8_t x)
> > {
> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4, x, 5, x, 6, x, 7, x, 8 };
> > }
> >
> > code-gen trunk:
> > f:
> >         adrp    x1, .LC0
> >         ldr     q0, [x1, #:lo12:.LC0]
> >         ins     v0.b[0], w0
> >         ins     v0.b[2], w0
> >         ins     v0.b[4], w0
> >         ins     v0.b[6], w0
> >         ins     v0.b[8], w0
> >         ins     v0.b[10], w0
> >         ins     v0.b[12], w0
> >         ins     v0.b[14], w0
> >         ret
> >
> > code-gen with patch:
> > f:
> >         dup     v0.16b, w0
> >         adrp    x0, .LC0
> >         ldr     q1, [x0, #:lo12:.LC0]
> >         zip1    v0.16b, v0.16b, v1.16b
> >         ret
> >
> > Bootstrapped+tested on aarch64-linux-gnu.
> > Does it look OK ?
>
> Looks like a nice improvement.  It'll need to wait for GCC 14 now though.
>
> However, rather than handle this case specially, I think we should instead
> take a divide-and-conquer approach: split the initialiser into even and
> odd elements, find the best way of loading each part, then compare the
> cost of these sequences + ZIP with the cost of the fallback code (the code
> later in aarch64_expand_vector_init).
>
> For example, doing that would allow:
>
>   { x, y, 0, y, 0, y, 0, y, 0, y }
>
> to be loaded more easily, even though the even elements aren't wholly
> constant.
Hi Richard,
I have attached a prototype patch based on the above approach.
It subsumes the special-casing of the {x, y, x, y, x, y, x, y} case above by
generating the same sequence, so I removed that hunk. It also improves the
following cases:

(a)
int8x16_t f_s16(int8_t x)
{
  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
                       x, 5, x, 6, x, 7, x, 8 };
}

code-gen trunk:
f_s16:
        adrp    x1, .LC0
        ldr     q0, [x1, #:lo12:.LC0]
        ins     v0.b[0], w0
        ins     v0.b[2], w0
        ins     v0.b[4], w0
        ins     v0.b[6], w0
        ins     v0.b[8], w0
        ins     v0.b[10], w0
        ins     v0.b[12], w0
        ins     v0.b[14], w0
        ret

code-gen with patch:
f_s16:
        dup     v0.16b, w0
        adrp    x0, .LC0
        ldr     q1, [x0, #:lo12:.LC0]
        zip1    v0.16b, v0.16b, v1.16b
        ret

(b)
int8x16_t f_s16(int8_t x, int8_t y)
{
  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
                       4, y, 5, y, 6, y, 7, y };
}

code-gen trunk:
f_s16:
        adrp    x2, .LC0
        ldr     q0, [x2, #:lo12:.LC0]
        ins     v0.b[0], w0
        ins     v0.b[1], w1
        ins     v0.b[3], w1
        ins     v0.b[5], w1
        ins     v0.b[7], w1
        ins     v0.b[9], w1
        ins     v0.b[11], w1
        ins     v0.b[13], w1
        ins     v0.b[15], w1
        ret

code-gen with patch:
f_s16:
        adrp    x2, .LC0
        dup     v1.16b, w1
        ldr     q0, [x2, #:lo12:.LC0]
        ins     v0.b[0], w0
        zip1    v0.16b, v0.16b, v1.16b
        ret
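
For reference, the sequence for (b) corresponds roughly to the following
intrinsics (a sketch with a hypothetical function name; the upper-half
lanes of the constant vector are irrelevant, since zip1 only reads the
lower halves):

#include <arm_neon.h>

int8x16_t f_s16_model (int8_t x, int8_t y)
{
  /* Even half { x, 1, 2, 3, 4, 5, 6, 7 }: one constant-pool load plus
     a single ins for the variable lane.  */
  int8x16_t even = vsetq_lane_s8 (x, (int8x16_t) { 0, 1, 2, 3, 4, 5, 6, 7,
                                                   1, 1, 1, 1, 1, 1, 1, 1 }, 0);
  /* Odd half: y in every lane, i.e. one dup.  */
  int8x16_t odd = vdupq_n_s8 (y);
  return vzip1q_s8 (even, odd);
}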

There are a couple of issues I have come across:
(1) Choosing the element to pad the vector with.
For eg, if we are initializing a vector, say { x, y, 0, y, 1, y, 2, y },
with mode V8HI, we split it into { x, 0, 1, 2 } and { y, y, y, y }.
However, since the mode is V8HI, we need to pad each of the split vectors
with 4 more elements to match the vector length.
For { x, 0, 1, 2 } any constant is the obvious padding choice, while for
{ y, y, y, y } 'y' is the obvious choice, making them:
{ x, 0, 1, 2, 0, 0, 0, 0 } and { y, y, y, y, y, y, y, y }
These are then merged using zip1, which discards the upper (padding) half
of both vectors.
Currently I encode the above two heuristics in
aarch64_expand_vector_init_get_padded_elem:
(a) If the split portion contains a constant, use that constant to pad
the vector.
(b) If the split portion contains only variables, use the most frequently
repeating variable to pad the vector.
I suppose though that this could be improved?  A sketch of the intended
expansion is below.
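
For illustration, the expansion I expect for the { x, y, 0, y, 1, y, 2, y }
example above corresponds roughly to the following standalone intrinsics
function (a sketch only -- the function name is hypothetical, and the patch
emits the equivalent RTL directly rather than intrinsics):

#include <arm_neon.h>

int16x8_t interleave_sketch (int16_t x, int16_t y)
{
  /* Even half { x, 0, 1, 2 } padded with the constant 0: the constant
     part becomes a literal-pool load and x is inserted with ins.  */
  int16x8_t even = vsetq_lane_s16 (x, (int16x8_t) { 0, 0, 1, 2,
                                                    0, 0, 0, 0 }, 0);
  /* Odd half { y, y, y, y } padded with y itself: a single dup.  */
  int16x8_t odd = vdupq_n_s16 (y);
  /* zip1 interleaves the lower halves, so the padding lanes of both
     inputs are discarded.  */
  return vzip1q_s16 (even, odd);
}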

(2) Setting the cost for zip1:
Currently the patch returns 4 as the cost for the following zip1 insn:
(set (reg:V8HI 102)
    (unspec:V8HI [
            (reg:V8HI 103)
            (reg:V8HI 108)
        ] UNSPEC_ZIP1))
I am not sure if that's correct, and if not, what cost to use for zip1 in
this case?
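
If costing zip1 as a single cheap permute is acceptable, one alternative to
the FIXME in the attached patch would be something along these lines (just
a sketch against aarch64_rtx_costs; using vect.alu for the permute is my
assumption, since the cost tables have no dedicated permute entry):

    case UNSPEC:
      if (XINT (x, 1) == UNSPEC_ZIP1)
        {
          /* Treat zip1 as one vector permute instruction.  */
          *cost = COSTS_N_INSNS (1);
          if (speed)
            *cost += extra_cost->vect.alu;
          return true;
        }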

Thanks,
Prathamesh
>
> Thanks,
> Richard
>
> >
> > Thanks,
> > Prathamesh
> >>
> >
> >> Thanks,
> >> Prathamesh
> >> >
> >> > >           }
> >> > >
> >> > > which avoids forcing constant elements into a register before the duplication.
> >> > > OK with that change if it works.
> >> > >
> >> > > Thanks,
> >> > > Richard
> >> > >
> >> > >> +
> >> > >> +      rtvec v = gen_rtvec (2, dest[0], dest[1]);
> >> > >> +      emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> >> > >> +      return;
> >> > >> +    }
> >> > >> +    }
> >> > >> +
> >> > >>    enum insn_code icode = optab_handler (vec_set_optab, mode);
> >> > >>    gcc_assert (icode != CODE_FOR_nothing);
> >> > >>
> >> > >> diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> >> > >> new file mode 100644
> >> > >> index 00000000000..ee775048589
> >> > >> --- /dev/null
> >> > >> +++ b/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> >> > >> @@ -0,0 +1,37 @@
> >> > >> +/* { dg-do compile } */
> >> > >> +/* { dg-options "-O3" } */
> >> > >> +/* { dg-final { check-function-bodies "**" "" "" } } */
> >> > >> +
> >> > >> +#include <arm_neon.h>
> >> > >> +
> >> > >> +/*
> >> > >> +** foo:
> >> > >> +**  ...
> >> > >> +**  dup     v[0-9]+\.8h, w[0-9]+
> >> > >> +**  dup     v[0-9]+\.8h, w[0-9]+
> >> > >> +**  zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> >> > >> +**  ...
> >> > >> +**  ret
> >> > >> +*/
> >> > >> +
> >> > >> +int16x8_t foo(int16_t x, int y)
> >> > >> +{
> >> > >> +  int16x8_t v = (int16x8_t) {x, y, x, y, x, y, x, y};
> >> > >> +  return v;
> >> > >> +}
> >> > >> +
> >> > >> +/*
> >> > >> +** foo2:
> >> > >> +**  ...
> >> > >> +**  dup     v[0-9]+\.8h, w[0-9]+
> >> > >> +**  movi    v[0-9]+\.8h, 0x1
> >> > >> +**  zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> >> > >> +**  ...
> >> > >> +**  ret
> >> > >> +*/
> >> > >> +
> >> > >> +int16x8_t foo2(int16_t x)
> >> > >> +{
> >> > >> +  int16x8_t v = (int16x8_t) {x, 1, x, 1, x, 1, x, 1};
> >> > >> +  return v;
> >> > >> +}
> >
> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> > index 9a79a9e7928..411e85f52a4 100644
> > --- a/gcc/config/aarch64/aarch64.cc
> > +++ b/gcc/config/aarch64/aarch64.cc
> > @@ -21984,6 +21984,54 @@ aarch64_simd_make_constant (rtx vals)
> >      return NULL_RTX;
> >  }
> >
> > +/* Subroutine of aarch64_expand_vector_init.
> > +   Check if VALS has same element at every alternate position
> > +   from START_POS.  */
> > +
> > +static
> > +bool aarch64_init_interleaving_dup_p (rtx vals, int start_pos)
> > +{
> > +  for (int i = start_pos + 2; i < XVECLEN (vals, 0); i += 2)
> > +    if (!rtx_equal_p (XVECEXP (vals, 0, start_pos), XVECEXP (vals, 0, i)))
> > +      return false;
> > +  return true;
> > +}
> > +
> > +/* Subroutine of aarch64_expand_vector_init.
> > +   Check if every alternate element in VALS starting from START_POS
> > +   is a constant.  */
> > +
> > +static
> > +bool aarch64_init_interleaving_const_p (rtx vals, int start_pos)
> > +{
> > +  for (int i = start_pos; i < XVECLEN (vals, 0); i += 2)
> > +    if (!CONSTANT_P (XVECEXP (vals, 0, i)))
> > +      return false;
> > +  return true;
> > +}
> > +
> > +/* Subroutine of aarch64_expand_vector_init.
> > +   Copy all odd-numbered or even-numbered elements from VALS
> > +   depending on CONST_EVEN.
> > +   For eg if VALS is { x, 1, x, 2, x, 3, x, 4 }
> > +   return {1, 2, 3, 4, 1, 1, 1, 1}.
> > +   We are only interested in the first half {0 ... n_elts/2} since
> > +   that will be used by zip1 for merging. Fill the second half
> > +   with an arbitrary value since it will be discarded.  */
> > +
> > +static
> > +rtx aarch64_init_interleaving_shift_init (rtx vals, bool const_even)
> > +{
> > +  int n_elts = XVECLEN (vals, 0);
> > +  rtvec vec = rtvec_alloc (n_elts);
> > +  int i;
> > +  for (i = 0; i < n_elts / 2; i++)
> > +    RTVEC_ELT (vec, i) = XVECEXP (vals, 0, (const_even) ? 2 * i : 2 * i + 1);
> > +  for (; i < n_elts; i++)
> > +    RTVEC_ELT (vec, i) = RTVEC_ELT (vec, 0);
> > +  return gen_rtx_CONST_VECTOR (GET_MODE (vals), vec);
> > +}
> > +
> >  /* Expand a vector initialisation sequence, such that TARGET is
> >     initialised to contain VALS.  */
> >
> > @@ -22048,22 +22096,55 @@ aarch64_expand_vector_init (rtx target, rtx vals)
> >        return;
> >      }
> >
> > -  /* Check for interleaving case.
> > -     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
> > -     Generate following code:
> > -     dup v0.h, x
> > -     dup v1.h, y
> > -     zip1 v0.h, v0.h, v1.h
> > -     for "large enough" initializer.  */
> > +  /* Check for interleaving case for "large enough" initializer.
> > +     Currently we handle following cases:
> > +     (a) Even part is dup and odd part is const.
> > +     (b) Odd part is dup and even part is const.
> > +     (c) Both even and odd parts are dup.  */
> >
> >    if (n_elts >= 8)
> >      {
> > -      int i;
> > -      for (i = 2; i < n_elts; i++)
> > -     if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
> > -       break;
> > +      bool even_dup = false, even_const = false;
> > +      bool odd_dup = false, odd_const = false;
> > +
> > +      even_dup = aarch64_init_interleaving_dup_p (vals, 0);
> > +      if (!even_dup)
> > +     even_const = aarch64_init_interleaving_const_p (vals, 0);
> > +
> > +      odd_dup = aarch64_init_interleaving_dup_p (vals, 1);
> > +      if (!odd_dup)
> > +     odd_const = aarch64_init_interleaving_const_p (vals, 1);
> > +
> > +      /* This case should already be handled above when all elements are constants.  */
> > +      gcc_assert (!(even_const && odd_const));
> >
> > -      if (i == n_elts)
> > +      if (even_dup && odd_const)
> > +     {
> > +       rtx dup_reg = expand_vector_broadcast (mode, XVECEXP (vals, 0, 0));
> > +       dup_reg = force_reg (mode, dup_reg);
> > +
> > +       rtx const_reg = gen_reg_rtx (mode);
> > +       rtx const_vector = aarch64_init_interleaving_shift_init (vals, false);
> > +       aarch64_expand_vector_init (const_reg, const_vector);
> > +
> > +       rtvec v = gen_rtvec (2, dup_reg, const_reg);
> > +       emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> > +       return;
> > +     }
> > +      else if (odd_dup && even_const)
> > +     {
> > +       rtx dup_reg = expand_vector_broadcast (mode, XVECEXP (vals, 0, 1));
> > +       dup_reg = force_reg (mode, dup_reg);
> > +
> > +       rtx const_reg = gen_reg_rtx (mode);
> > +       rtx const_vector = aarch64_init_interleaving_shift_init (vals, true);
> > +       aarch64_expand_vector_init (const_reg, const_vector);
> > +
> > +       rtvec v = gen_rtvec (2, const_reg, dup_reg);
> > +       emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> > +       return;
> > +     }
> > +      else if (even_dup && odd_dup)
> >       {
> >         machine_mode mode = GET_MODE (target);
> >         rtx dest[2];
> > diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-2.c b/gcc/testsuite/gcc.target/aarch64/interleave-init-2.c
> > new file mode 100644
> > index 00000000000..3ad06c00451
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/interleave-init-2.c
> > @@ -0,0 +1,51 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O3" } */
> > +/* { dg-final { check-function-bodies "**" "" "" } } */
> > +
> > +#include "arm_neon.h"
> > +
> > +/*
> > +**foo:
> > +**   ...
> > +**   dup     v[0-9]+\.8h, w[0-9]+
> > +**   adrp    x[0-9]+, .LC[0-9]+
> > +**   ldr     q[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
> > +**   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> > +**   ...
> > +*/
> > +
> > +int16x8_t foo(int16_t x)
> > +{
> > +  return (int16x8_t) { x, 1, x, 2, x, 3, x, 4 };
> > +}
> > +
> > +
> > +/*
> > +**foo2:
> > +**   ...
> > +**   dup     v[0-9]+\.8h, w[0-9]+
> > +**   adrp    x[0-9]+, .LC[0-9]+
> > +**   ldr     q[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
> > +**   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> > +**   ...
> > +*/
> > +
> > +int16x8_t foo2(int16_t x)
> > +{
> > +  return (int16x8_t) { 1, x, 2, x, 3, x, 4, x };
> > +}
> > +
> > +/*
> > +**foo3:
> > +**   ...
> > +**   dup     v[0-9]+\.8h, v[0-9]+\.h\[0\]
> > +**   adrp    x[0-9]+, .LC[0-9]+
> > +**   ldr     q[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
> > +**   zip1    v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> > +**   ...
> > +*/
> > +
> > +float16x8_t foo3(float16_t x)
> > +{
> > +  return (float16x8_t) { x, 1.0, x, 2.0, x, 3.0, x, 4.0 };
> > +}

[-- Attachment #2: gnu-821-1.txt --]
[-- Type: text/plain, Size: 11424 bytes --]

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 17c1e23e5b5..0090fb47d98 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -15065,6 +15065,11 @@ cost_plus:
       return false;
 
     case UNSPEC:
+      /* FIXME: What cost to use for zip1 ?
+	 Currently using default cost.  */
+      if (XINT (x, 1) == UNSPEC_ZIP1)
+	break;
+
       /* The floating point round to integer frint* instructions.  */
       if (aarch64_frint_unspec_p (XINT (x, 1)))
         {
@@ -21972,11 +21977,44 @@ aarch64_simd_make_constant (rtx vals)
     return NULL_RTX;
 }
 
+/* The algorithm will fill matches[*][0] with the earliest matching element,
+   and matches[X][1] with the count of duplicate elements (if X is the
+   earliest element which has duplicates).  */
+
+static void
+aarch64_expand_vector_init_get_most_repeating_elem (rtx vals, int n,
+						    int (*matches)[2],
+						    int &maxv, int &maxelement)
+{
+  memset (matches, 0, 16 * 2 * sizeof (int));
+  for (int i = 0; i < n; i++)
+    {
+      for (int j = 0; j <= i; j++)
+	{
+	  if (rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, j)))
+	    {
+	      matches[i][0] = j;
+	      matches[j][1]++;
+	      break;
+	    }
+	}
+    }
+  
+  maxelement = 0;
+  maxv = 0;
+  for (int i = 0; i < n; i++)
+    if (matches[i][1] > maxv)
+      {
+	maxelement = i;
+	maxv = matches[i][1];
+      }
+}
+
 /* Expand a vector initialisation sequence, such that TARGET is
    initialised to contain VALS.  */
 
-void
-aarch64_expand_vector_init (rtx target, rtx vals)
+static void
+aarch64_expand_vector_init_fallback (rtx target, rtx vals)
 {
   machine_mode mode = GET_MODE (target);
   scalar_mode inner_mode = GET_MODE_INNER (mode);
@@ -22036,38 +22074,6 @@ aarch64_expand_vector_init (rtx target, rtx vals)
       return;
     }
 
-  /* Check for interleaving case.
-     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
-     Generate following code:
-     dup v0.h, x
-     dup v1.h, y
-     zip1 v0.h, v0.h, v1.h
-     for "large enough" initializer.  */
-
-  if (n_elts >= 8)
-    {
-      int i;
-      for (i = 2; i < n_elts; i++)
-	if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
-	  break;
-
-      if (i == n_elts)
-	{
-	  machine_mode mode = GET_MODE (target);
-	  rtx dest[2];
-
-	  for (int i = 0; i < 2; i++)
-	    {
-	      rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
-	      dest[i] = force_reg (mode, x);
-	    }
-
-	  rtvec v = gen_rtvec (2, dest[0], dest[1]);
-	  emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
-	  return;
-	}
-    }
-
   enum insn_code icode = optab_handler (vec_set_optab, mode);
   gcc_assert (icode != CODE_FOR_nothing);
 
@@ -22075,33 +22081,15 @@ aarch64_expand_vector_init (rtx target, rtx vals)
      the insertion using dup for the most common element
      followed by insertions.  */
 
-  /* The algorithm will fill matches[*][0] with the earliest matching element,
-     and matches[X][1] with the count of duplicate elements (if X is the
-     earliest element which has duplicates).  */
 
   if (n_var == n_elts && n_elts <= 16)
     {
-      int matches[16][2] = {0};
-      for (int i = 0; i < n_elts; i++)
-	{
-	  for (int j = 0; j <= i; j++)
-	    {
-	      if (rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, j)))
-		{
-		  matches[i][0] = j;
-		  matches[j][1]++;
-		  break;
-		}
-	    }
-	}
-      int maxelement = 0;
-      int maxv = 0;
-      for (int i = 0; i < n_elts; i++)
-	if (matches[i][1] > maxv)
-	  {
-	    maxelement = i;
-	    maxv = matches[i][1];
-	  }
+      int matches[16][2];
+      int maxelement, maxv;
+      aarch64_expand_vector_init_get_most_repeating_elem (vals, n_elts,
+      							  matches,
+							  maxv,
+							  maxelement);
 
       /* Create a duplicate of the most common element, unless all elements
 	 are equally useless to us, in which case just immediately set the
@@ -22189,7 +22177,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
 	    }
 	  XVECEXP (copy, 0, i) = subst;
 	}
-      aarch64_expand_vector_init (target, copy);
+      aarch64_expand_vector_init_fallback (target, copy);
     }
 
   /* Insert the variable lanes directly.  */
@@ -22203,6 +22191,126 @@ aarch64_expand_vector_init (rtx target, rtx vals)
     }
 }
 
+/* Return the element used to pad VALS, as described in the comment
+   for aarch64_expand_vector_init_split_vals.  */
+
+static rtx
+aarch64_expand_vector_init_get_padded_elem (rtx vals, int n)
+{
+  for (int i = 0; i < n; i++)
+    {
+      rtx elem = XVECEXP (vals, 0, i);
+      if (CONST_INT_P (elem) || CONST_DOUBLE_P (elem))
+	return elem;
+    }
+
+  int matches[16][2];
+  int maxv, maxelement;
+  aarch64_expand_vector_init_get_most_repeating_elem (vals, n, matches, maxv, maxelement);
+  return XVECEXP (vals, 0, maxelement);
+}
+
+/*
+Split VALS into its even or odd half; since the mode remains the same,
+we have to pad with extra elements to fill the vector length.
+The function uses a couple of heuristics for padding:
+(1) If the split portion contains a constant, pad the vector with
+    that constant element.
+    For eg if the split portion is {x, 1, 2, 3} and mode is V8HI,
+    then the result is {x, 1, 2, 3, 1, 1, 1, 1}.
+(2) If the split portion consists entirely of variables, then use the
+    most frequently repeating variable as the padding element.
+    For eg if the split portion is {x, x, x, y} and mode is V8HI,
+    then the result is {x, x, x, y, x, x, x, x}.
+    We use the most frequently repeating variable so dup will initialize
+    most of the vector and then ins will insert the remaining ones,
+    which is done in aarch64_expand_vector_init_fallback.
+*/
+
+static rtx
+aarch64_expand_vector_init_split_vals (rtx vals, bool even_p)
+{
+  rtx new_vals = copy_rtx (vals);
+  int n = XVECLEN (vals, 0);
+  int i;
+  for (i = 0; i < n / 2; i++)
+    XVECEXP (new_vals, 0, i)
+      = XVECEXP (new_vals, 0, (even_p) ? 2 * i : 2 * i + 1);
+
+  rtx padded_val
+    = aarch64_expand_vector_init_get_padded_elem (new_vals, n / 2); 
+  for (; i < n; i++)
+    XVECEXP (new_vals, 0, i) = padded_val;
+  return new_vals;
+}
+
+DEBUG_FUNCTION
+static void
+aarch64_expand_vector_init_debug_seq (rtx_insn *seq, const char *s)
+{
+  fprintf (stderr, "%s: %u\n", s, seq_cost (seq, !optimize_size));
+  for (rtx_insn *i = seq; i; i = NEXT_INSN (i))
+    {
+      debug_rtx (PATTERN (i));
+      fprintf (stderr, "cost: %d\n", pattern_cost (PATTERN (i), !optimize_size));
+    }
+}
+
+/*
+The function does the following:
+(a) Generate a code sequence by splitting VALS into even and odd halves,
+    recursively calling itself to initialize them, and then merging the
+    halves with zip1.
+(b) Generate a code sequence directly using aarch64_expand_vector_init_fallback.
+(c) Compare the costs of the sequences generated by (a) and (b), and choose
+    the more efficient one.
+*/
+
+void
+aarch64_expand_vector_init_1 (rtx target, rtx vals, int n_elts)
+{
+  if (n_elts < 8)
+    {
+      aarch64_expand_vector_init_fallback (target, vals);
+      return;
+    }
+
+  machine_mode mode = GET_MODE (target);
+
+  start_sequence ();
+  rtx dest[2];
+  for (int i = 0; i < 2; i++)
+    {
+      dest[i] = gen_reg_rtx (mode);
+      rtx new_vals
+	= aarch64_expand_vector_init_split_vals (vals, (i % 2) == 0);
+      aarch64_expand_vector_init_1 (dest[i], new_vals, n_elts / 2);
+    }
+
+  rtvec v = gen_rtvec (2, dest[0], dest[1]);
+  emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
+  
+  rtx_insn *seq = get_insns (); 
+  end_sequence ();
+
+  start_sequence ();
+  aarch64_expand_vector_init_fallback (target, vals);
+  rtx_insn *fallback_seq = get_insns (); 
+  end_sequence ();
+
+  emit_insn (seq_cost (seq, !optimize_size)
+	     < seq_cost (fallback_seq, !optimize_size)
+	     ? seq : fallback_seq);
+}
+
+/* Wrapper around aarch64_expand_vector_init_1.  */
+
+void
+aarch64_expand_vector_init (rtx target, rtx vals)
+{
+  aarch64_expand_vector_init_1 (target, vals, XVECLEN (vals, 0));
+}
+
 /* Emit RTL corresponding to:
    insr TARGET, ELEM.  */
 
diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
similarity index 100%
rename from gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
rename to gcc/testsuite/gcc.target/aarch64/vec-init-18.c
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
new file mode 100644
index 00000000000..d204c7e1f8b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	dup	v[0-9]+\.16b, w[0-9]+
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x)
+{
+  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
+                       x, 5, x, 6, x, 7, x, 8 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
new file mode 100644
index 00000000000..c2c97469940
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	dup	v[0-9]+\.16b, w[0-9]+
+**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
+                       4, y, 5, y, 6, y, 7, y };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
new file mode 100644
index 00000000000..e16459486d7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** f_s8:
+**	...
+**	adrp	x[0-9]+, \.LC[0-9]+
+**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
+**	ins	v0\.b\[0\], w0
+**	ins	v0\.b\[1\], w1
+**	...
+**	ret
+*/
+
+int8x16_t f_s8(int8_t x, int8_t y)
+{
+  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
+                       7, 8, 9, 10, 11, 12, 13, 14 };
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22.c
new file mode 100644
index 00000000000..e5016a47a3b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.c
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/* Verify that fallback code-sequence is chosen over
+   recursively generated code-sequence merged with zip1.  */
+
+/*
+** f_s16:
+**	...
+**	sxth	w0, w0
+**	fmov	s0, w0
+**	ins	v0\.h\[1\], w1
+**	ins	v0\.h\[2\], w2
+**	ins	v0\.h\[3\], w3
+**	ins	v0\.h\[4\], w4
+**	ins	v0\.h\[5\], w5
+**	ins	v0\.h\[6\], w6
+**	ins	v0\.h\[7\], w7
+**	...
+**	ret
+*/
+
+int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
+                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
+{
+  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
+}

