public inbox for gcc-patches@gcc.gnu.org
From: Richard Sandiford <richard.sandiford@arm.com>
To: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
Cc: Richard Biener <rguenther@suse.de>,
	 gcc Patches <gcc-patches@gcc.gnu.org>
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Tue, 04 Apr 2023 19:05:55 +0100	[thread overview]
Message-ID: <mpth6tvo73w.fsf@arm.com> (raw)
In-Reply-To: <CAAgBjM=1qkfP=f-_LTXJtqoexpGMzNdUpEJY2y1nRyTn8XUcow@mail.gmail.com> (Prathamesh Kulkarni's message of "Mon, 3 Apr 2023 22:03:24 +0530")

Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> On Mon, 13 Mar 2023 at 13:03, Richard Biener <rguenther@suse.de> wrote:
>> On GIMPLE it would be
>>
>>  _1 = { a, ... }; // (a)
>>  _2 = { _1, ... }; // (b)
>>
>> but I'm not sure if (b), a VL CTOR of fixed len(?) sub-vectors is
>> possible?  But at least a CTOR of vectors is what we use to
>> concat vectors.
>>
>> With the recent relaxing of VEC_PERM inputs it's also possible to
>> express (b) with a VEC_PERM:
>>
>>  _2 = VEC_PERM <_1, _1, { 0, 1, 2, 3, 0, 1, 2, 3, ... }>
>>
>> but again I'm not sure if that repeating 0, 1, 2, 3 is expressible
>> for VL vectors (maybe we'd allow "wrapping" here, I'm not sure).
>>
> Hi,
> Thanks for the suggestions, and sorry in turn for the late response.
> The attached patch tries to fix the issue by explicitly constructing a CTOR
> from svdupq's arguments and then using a VEC_PERM_EXPR with a VL mask whose
> encoded elements are {0, 1, ..., nargs-1}, with npatterns == nargs and
> nelts_per_pattern == 1, to replicate the base vector.
>
> So for example, for the above case,
> svint32_t f_32(int32x4_t x)
> {
>   return svdupq_s32 (x[0], x[1], x[2], x[3]);
> }
>
> forwprop1 lowers it to:
>   svint32_t _6;
>   vector(4) int _8;
>  <bb 2> :
>   _1 = BIT_FIELD_REF <x_5(D), 32, 0>;
>   _2 = BIT_FIELD_REF <x_5(D), 32, 32>;
>   _3 = BIT_FIELD_REF <x_5(D), 32, 64>;
>   _4 = BIT_FIELD_REF <x_5(D), 32, 96>;
>   _8 = {_1, _2, _3, _4};
>   _6 = VEC_PERM_EXPR <_8, _8, { 0, 1, 2, 3, ... }>;
>   return _6;
>
> which is then eventually optimized to:
>   svint32_t _6;
>   <bb 2> [local count: 1073741824]:
>   _6 = VEC_PERM_EXPR <x_5(D), x_5(D), { 0, 1, 2, 3, ... }>;
>   return _6;
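>
> (To spell out the mask above: because it is encoded with npatterns == 4
> and nelts_per_pattern == 1, { 0, 1, 2, 3, ... } expands per vector length
> roughly as:
>
>   VL=128: { 0, 1, 2, 3 }
>   VL=256: { 0, 1, 2, 3, 0, 1, 2, 3 }
>   VL=512: { 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3 }
>
> i.e. it keeps selecting the four lanes of the 128-bit base vector.)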
>
> code-gen:
> f_32:
>         dup     z0.q, z0.q[0]
>         ret

Nice!

> Does it look OK?
>
> Thanks,
> Prathamesh
>> Richard.
>>
>> > We're planning to implement the ACLE's Neon-SVE bridge:
>> > https://github.com/ARM-software/acle/blob/main/main/acle.md#neon-sve-bridge
>> > and so we'll need (b) to implement the svdup_neonq functions.
>> >
>> > Thanks,
>> > Richard
>> >
>>
>> --
>> Richard Biener <rguenther@suse.de>
>> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
>> Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
>> HRB 36809 (AG Nuernberg)
>
> [SVE] Fold svld1rq to VEC_PERM_EXPR if elements are not constant.
>
> gcc/ChangeLog:
> 	* config/aarch64/aarch64-sve-builtins-base.cc
> 	(svdupq_impl::fold_nonconst_dupq): New method.
> 	(svdupq_impl::fold): Call fold_nonconst_dupq.
>
> gcc/testsuite/ChangeLog:
> 	* gcc.target/aarch64/sve/acle/general/dupq_11.c: New test.
>
> diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> index cd9cace3c9b..3de79060619 100644
> --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> @@ -817,6 +817,62 @@ public:
>  
>  class svdupq_impl : public quiet<function_base>
>  {
> +private:
> +  gimple *
> +  fold_nonconst_dupq (gimple_folder &f, unsigned factor) const
> +  {
> +    /* Lower lhs = svdupq (arg0, arg1, ..., arg<N-1>) into:
> +       tmp = {arg0, arg1, ..., arg<N-1>}
> +       lhs = VEC_PERM_EXPR (tmp, tmp, {0, 1, ..., N-1, 0, 1, ..., N-1, ...}).  */
> +
> +    /* TODO: Revisit to handle factor by padding zeros.  */
> +    if (factor > 1)
> +      return NULL;

Isn't the key thing here predicate vs. vector rather than factor == 1 vs.
factor != 1?  Do we generate good code for b8, where factor should be 1?
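
For concreteness, the b8 case I mean is something like the following
(just a sketch, I haven't tried it against the patch):

  svbool_t f_b8 (int8x16_t x)
  {
    return svdupq_b8 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7],
                      x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]);
  }

where each argument supplies one boolean lane of the predicate.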

> +
> +    if (BYTES_BIG_ENDIAN)
> +      return NULL;
> +
> +    tree lhs = gimple_call_lhs (f.call);
> +    if (TREE_CODE (lhs) != SSA_NAME)
> +      return NULL;

Why is this check needed?

> +    tree lhs_type = TREE_TYPE (lhs);
> +    tree elt_type = TREE_TYPE (lhs_type);
> +    scalar_mode elt_mode = GET_MODE_INNER (TYPE_MODE (elt_type));

Aren't we already dealing with a scalar type here?  I'd have expected
SCALAR_TYPE_MODE rather than GET_MODE_INNER (TYPE_MODE ...).
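
E.g. (as a sketch of what I had in mind):

  scalar_mode elt_mode = SCALAR_TYPE_MODE (elt_type);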

> +    machine_mode vq_mode = aarch64_vq_mode (elt_mode).require ();
> +    tree vq_type = build_vector_type_for_mode (elt_type, vq_mode);
> +
> +    unsigned nargs = gimple_call_num_args (f.call);
> +    vec<constructor_elt, va_gc> *v;
> +    vec_alloc (v, nargs);
> +    for (unsigned i = 0; i < nargs; i++)
> +      CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, gimple_call_arg (f.call, i));
> +    tree vec = build_constructor (vq_type, v);
> +
> +    tree access_type
> +      = build_aligned_type (vq_type, TYPE_ALIGN (elt_type));

Nit: seems to fit on one line.  But do we need this?  We're not accessing
memory, so I'd have expected vq_type to be OK as-is.
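
I.e. something like (untested):

  tree tmp = make_ssa_name (vq_type);
  gimple *g = gimple_build_assign (tmp, vec);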

> +    tree tmp = make_ssa_name_fn (cfun, access_type, 0);
> +    gimple *g = gimple_build_assign (tmp, vec);
> +
> +    gimple_seq stmts = NULL;
> +    gimple_seq_add_stmt_without_update (&stmts, g);
> +
> +    int source_nelts = TYPE_VECTOR_SUBPARTS (access_type).to_constant ();

Looks like we should be able to use nargs instead of source_nelts.
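
E.g. (sketch only):

  vec_perm_builder sel (lhs_len, nargs, 1);
  for (unsigned int i = 0; i < nargs; ++i)
    sel.quick_push (i);

  vec_perm_indices indices (sel, 1, nargs);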

Thanks,
Richard

> +    poly_uint64 lhs_len = TYPE_VECTOR_SUBPARTS (lhs_type);
> +    vec_perm_builder sel (lhs_len, source_nelts, 1);
> +    for (int i = 0; i < source_nelts; i++)
> +      sel.quick_push (i);
> +
> +    vec_perm_indices indices (sel, 1, source_nelts);
> +    tree mask_type = build_vector_type (ssizetype, lhs_len);
> +    tree mask = vec_perm_indices_to_tree (mask_type, indices);
> +
> +    gimple *g2 = gimple_build_assign (lhs, VEC_PERM_EXPR, tmp, tmp, mask);
> +    gimple_seq_add_stmt_without_update (&stmts, g2);
> +    gsi_replace_with_seq (f.gsi, stmts, false);
> +    return g2;
> +  }
> +
>  public:
>    gimple *
>    fold (gimple_folder &f) const override
> @@ -832,7 +888,7 @@ public:
>        {
>  	tree elt = gimple_call_arg (f.call, i);
>  	if (!CONSTANT_CLASS_P (elt))
> -	  return NULL;
> +	  return fold_nonconst_dupq (f, factor);
>  	builder.quick_push (elt);
>  	for (unsigned int j = 1; j < factor; ++j)
>  	  builder.quick_push (build_zero_cst (TREE_TYPE (vec_type)));
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
> new file mode 100644
> index 00000000000..f19f8deb1e5
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
> @@ -0,0 +1,31 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -fdump-tree-optimized" } */
> +
> +#include <arm_sve.h>
> +#include <arm_neon.h>
> +
> +svint8_t f_s8(int8x16_t x)
> +{
> +  return svdupq_s8 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7],
> +		    x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]);
> +}
> +
> +svint16_t f_s16(int16x8_t x)
> +{
> +  return svdupq_s16 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7]);
> +}
> +
> +svint32_t f_s32(int32x4_t x)
> +{
> +  return svdupq_s32 (x[0], x[1], x[2], x[3]);
> +}
> +
> +svint64_t f_s64(int64x2_t x)
> +{
> +  return svdupq_s64 (x[0], x[1]);
> +}
> +
> +/* { dg-final { scan-tree-dump "VEC_PERM_EXPR" "optimized" } } */
> +/* { dg-final { scan-tree-dump-not "svdupq" "optimized" } } */
> +
> +/* { dg-final { scan-assembler-times {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]\n} 4 } } */

Thread overview: 34+ messages
2022-11-29 14:39 Prathamesh Kulkarni
2022-11-29 15:13 ` Andrew Pinski
2022-11-29 17:06   ` Prathamesh Kulkarni
2022-12-05 10:52 ` Richard Sandiford
2022-12-05 11:20   ` Richard Sandiford
2022-12-06  1:31     ` Prathamesh Kulkarni
2022-12-26  4:22       ` Prathamesh Kulkarni
2023-01-12 15:51         ` Richard Sandiford
2023-02-01  9:36           ` Prathamesh Kulkarni
2023-02-01 16:26             ` Richard Sandiford
2023-02-02 14:51               ` Prathamesh Kulkarni
2023-02-02 15:20                 ` Richard Sandiford
2023-02-03  1:40                   ` Prathamesh Kulkarni
2023-02-03  3:02                     ` Prathamesh Kulkarni
2023-02-03 15:17                       ` Richard Sandiford
2023-02-04  6:49                         ` Prathamesh Kulkarni
2023-02-06 12:13                           ` Richard Sandiford
2023-02-11  9:12                             ` Prathamesh Kulkarni
2023-03-10 18:08                               ` Richard Sandiford
2023-03-13  7:33                                 ` Richard Biener
2023-04-03 16:33                                   ` Prathamesh Kulkarni
2023-04-04 18:05                                     ` Richard Sandiford [this message]
2023-04-06 10:26                                       ` Prathamesh Kulkarni
2023-04-06 10:34                                         ` Richard Sandiford
2023-04-06 11:21                                           ` Prathamesh Kulkarni
2023-04-12  8:59                                             ` Richard Sandiford
2023-04-21  7:27                                               ` Prathamesh Kulkarni
2023-04-21  9:17                                                 ` Richard Sandiford
2023-04-21 15:15                                                   ` Prathamesh Kulkarni
2023-04-23  1:53                                                     ` Prathamesh Kulkarni
2023-04-24  9:29                                                       ` Richard Sandiford
2023-05-04 11:47                                                         ` Prathamesh Kulkarni
2023-05-11 19:07                                                           ` Richard Sandiford
2023-05-13  9:10                                                             ` Prathamesh Kulkarni
