From: Richard Sandiford <richard.sandiford@arm.com>
To: Prathamesh Kulkarni
Cc: Richard Biener, gcc Patches
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Tue, 04 Apr 2023 19:05:55 +0100

Prathamesh Kulkarni writes:
> On Mon, 13 Mar 2023 at 13:03, Richard Biener wrote:
>> On GIMPLE it would be
>>
>>    _1 = { a, ... }; // (a)
>>    _2 = { _1, ... }; // (b)
>>
>> but I'm not sure if (b), a VL CTOR of fixed len(?) sub-vectors is
>> possible?  But at least a CTOR of vectors is what we use to
>> concat vectors.
>>
>> With the recent relaxing of VEC_PERM inputs it's also possible to
>> express (b) with a VEC_PERM:
>>
>>    _2 = VEC_PERM <_1, _1, { 0, 1, 2, 3, 0, 1, 2, 3, ... }>
>>
>> but again I'm not sure if that repeating 0, 1, 2, 3 is expressible
>> for VL vectors (maybe we'd allow "wrapping" here, I'm not sure).
>>
> Hi,
> Thanks for the suggestions and sorry for late response in turn.
> The attached patch tries to fix the issue by explicitly constructing a CTOR
> from svdupq's arguments and then using VEC_PERM_EXPR with VL mask
> having encoded elements {0, 1, ... nargs-1},
> npatterns == nargs, and nelts_per_pattern == 1, to replicate the base vector.
>
> So for example, for the above case,
>
> svint32_t f_32(int32x4_t x)
> {
>   return svdupq_s32 (x[0], x[1], x[2], x[3]);
> }
>
> forwprop1 lowers it to:
>
>   svint32_t _6;
>   vector(4) int _8;
>   :
>   _1 = BIT_FIELD_REF ;
>   _2 = BIT_FIELD_REF ;
>   _3 = BIT_FIELD_REF ;
>   _4 = BIT_FIELD_REF ;
>   _8 = {_1, _2, _3, _4};
>   _6 = VEC_PERM_EXPR <_8, _8, { 0, 1, 2, 3, ... }>;
>   return _6;
>
> which is then eventually optimized to:
>
>   svint32_t _6;
>   [local count: 1073741824]:
>   _6 = VEC_PERM_EXPR ;
>   return _6;
>
> code-gen:
> f_32:
>         dup     z0.q, z0.q[0]
>         ret

Nice!

> Does it look OK ?
>
> Thanks,
> Prathamesh
>
>> Richard.
>>
>> > We're planning to implement the ACLE's Neon-SVE bridge:
>> > https://github.com/ARM-software/acle/blob/main/main/acle.md#neon-sve-bridge
>> > and so we'll need (b) to implement the svdup_neonq functions.
>> >
>> > Thanks,
>> > Richard
>> >
>>
>> --
>> Richard Biener
>> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
>> Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
>> HRB 36809 (AG Nuernberg)
>
> [SVE] Fold svld1rq to VEC_PERM_EXPR if elements are not constant.
>
> gcc/ChangeLog:
> 	* config/aarch64/aarch64-sve-builtins-base.cc
> 	(svdupq_impl::fold_nonconst_dupq): New method.
> 	(svdupq_impl::fold): Call fold_nonconst_dupq.
>
> gcc/testsuite/ChangeLog:
> 	* gcc.target/aarch64/sve/acle/general/dupq_11.c: New test.
>
> diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> index cd9cace3c9b..3de79060619 100644
> --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> @@ -817,6 +817,62 @@ public:
>
>  class svdupq_impl : public quiet<function_base>
>  {
> +private:
> +  gimple *
> +  fold_nonconst_dupq (gimple_folder &f, unsigned factor) const
> +  {
> +    /* Lower lhs = svdupq (arg0, arg1, ..., argN) into:
> +       tmp = {arg0, arg1, ..., argN}
> +       lhs = VEC_PERM_EXPR (tmp, tmp, {0, 1, 2, ..., N-1, ...})  */
> +
> +    /* TODO: Revisit to handle factor by padding zeros.  */
> +    if (factor > 1)
> +      return NULL;

Isn't the key thing here predicate vs. vector rather than factor == 1
vs. factor != 1?  Do we generate good code for b8, where factor should
be 1?

> +
> +    if (BYTES_BIG_ENDIAN)
> +      return NULL;
> +
> +    tree lhs = gimple_call_lhs (f.call);
> +    if (TREE_CODE (lhs) != SSA_NAME)
> +      return NULL;

Why is this check needed?

> +    tree lhs_type = TREE_TYPE (lhs);
> +    tree elt_type = TREE_TYPE (lhs_type);
> +    scalar_mode elt_mode = GET_MODE_INNER (TYPE_MODE (elt_type));

Aren't we already dealing with a scalar type here?  I'd have expected
SCALAR_TYPE_MODE rather than GET_MODE_INNER (TYPE_MODE ...).

> +    machine_mode vq_mode = aarch64_vq_mode (elt_mode).require ();
> +    tree vq_type = build_vector_type_for_mode (elt_type, vq_mode);
> +
> +    unsigned nargs = gimple_call_num_args (f.call);
> +    vec<constructor_elt, va_gc> *v;
> +    vec_alloc (v, nargs);
> +    for (unsigned i = 0; i < nargs; i++)
> +      CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, gimple_call_arg (f.call, i));
> +    tree vec = build_constructor (vq_type, v);
> +
> +    tree access_type
> +      = build_aligned_type (vq_type, TYPE_ALIGN (elt_type));

Nit: seems to fit on one line.  But do we need this?  We're not accessing
memory, so I'd have expected vq_type to be OK as-is.
> +    tree tmp = make_ssa_name_fn (cfun, access_type, 0);
> +    gimple *g = gimple_build_assign (tmp, vec);
> +
> +    gimple_seq stmts = NULL;
> +    gimple_seq_add_stmt_without_update (&stmts, g);
> +
> +    int source_nelts = TYPE_VECTOR_SUBPARTS (access_type).to_constant ();

Looks like we should be able to use nargs instead of source_nelts.

Thanks,
Richard

> +    poly_uint64 lhs_len = TYPE_VECTOR_SUBPARTS (lhs_type);
> +    vec_perm_builder sel (lhs_len, source_nelts, 1);
> +    for (int i = 0; i < source_nelts; i++)
> +      sel.quick_push (i);
> +
> +    vec_perm_indices indices (sel, 1, source_nelts);
> +    tree mask_type = build_vector_type (ssizetype, lhs_len);
> +    tree mask = vec_perm_indices_to_tree (mask_type, indices);
> +
> +    gimple *g2 = gimple_build_assign (lhs, VEC_PERM_EXPR, tmp, tmp, mask);
> +    gimple_seq_add_stmt_without_update (&stmts, g2);
> +    gsi_replace_with_seq (f.gsi, stmts, false);
> +    return g2;
> +  }
> +
>  public:
>    gimple *
>    fold (gimple_folder &f) const override
> @@ -832,7 +888,7 @@ public:
>      {
>        tree elt = gimple_call_arg (f.call, i);
>        if (!CONSTANT_CLASS_P (elt))
> -	return NULL;
> +	return fold_nonconst_dupq (f, factor);
>        builder.quick_push (elt);
>        for (unsigned int j = 1; j < factor; ++j)
>  	builder.quick_push (build_zero_cst (TREE_TYPE (vec_type)));
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
> new file mode 100644
> index 00000000000..f19f8deb1e5
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
> @@ -0,0 +1,31 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -fdump-tree-optimized" } */
> +
> +#include <arm_neon.h>
> +#include <arm_sve.h>
> +
> +svint8_t f_s8(int8x16_t x)
> +{
> +  return svdupq_s8 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7],
> +		    x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]);
> +}
> +
> +svint16_t f_s16(int16x8_t x)
> +{
> +  return svdupq_s16 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7]);
> +}
> +
> +svint32_t f_s32(int32x4_t x)
> +{
> +  return svdupq_s32 (x[0], x[1], x[2], x[3]);
> +}
> +
> +svint64_t f_s64(int64x2_t x)
> +{
> +  return svdupq_s64 (x[0], x[1]);
> +}
> +
> +/* { dg-final { scan-tree-dump "VEC_PERM_EXPR" "optimized" } } */
> +/* { dg-final { scan-tree-dump-not "svdupq" "optimized" } } */
> +
> +/* { dg-final { scan-assembler-times {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]\n} 4 } } */