From: Richard Sandiford
To: Prathamesh Kulkarni
Cc: Richard Biener, gcc Patches
Subject: Re: [aarch64] Use dup and zip1 for interleaving elements in initializing vector
Date: Mon, 24 Apr 2023 10:29:58 +0100

Prathamesh Kulkarni writes:
> [aarch64] Recursively initialize even and odd sub-parts and merge with zip1.
>
> gcc/ChangeLog:
>       * config/aarch64/aarch64.cc (aarch64_expand_vector_init_fallback): Rename
>       aarch64_expand_vector_init to this, and remove interleaving case.
>       Recursively call aarch64_expand_vector_init_fallback, instead of
>       aarch64_expand_vector_init.
>       (aarch64_unzip_vector_init): New function.
>       (aarch64_expand_vector_init): Likewise.
>
> gcc/testsuite/ChangeLog:
>       * gcc.target/aarch64/ldp_stp_16.c (cons2_8_float): Adjust for new
>       code-gen.
>       * gcc.target/aarch64/sve/acle/general/dupq_5.c: Likewise.
>       * gcc.target/aarch64/sve/acle/general/dupq_6.c: Likewise.
>       * gcc.target/aarch64/vec-init-18.c: Rename interleave-init-1.c to
>       this.
>       * gcc.target/aarch64/vec-init-19.c: New test.
>       * gcc.target/aarch64/vec-init-20.c: Likewise.
>       * gcc.target/aarch64/vec-init-21.c: Likewise.
>       * gcc.target/aarch64/vec-init-22-size.c: Likewise.
>       * gcc.target/aarch64/vec-init-22-speed.c: Likewise.
>       * gcc.target/aarch64/vec-init-22.h: New header.
>
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index d7e895f8d34..416e062829c 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -22026,11 +22026,12 @@ aarch64_simd_make_constant (rtx vals)
>      return NULL_RTX;
>  }
>
> -/* Expand a vector initialisation sequence, such that TARGET is
> -   initialised to contain VALS.  */
> +/* A subroutine of aarch64_expand_vector_init, with the same interface.
> +   The caller has already tried a divide-and-conquer approach, so do
> +   not consider that case here.  */
>
>  void
> -aarch64_expand_vector_init (rtx target, rtx vals)
> +aarch64_expand_vector_init_fallback (rtx target, rtx vals)
>  {
>    machine_mode mode = GET_MODE (target);
>    scalar_mode inner_mode = GET_MODE_INNER (mode);
> @@ -22090,38 +22091,6 @@ aarch64_expand_vector_init (rtx target, rtx vals)
>        return;
>      }
>
> -  /* Check for interleaving case.
> -     For eg if initializer is (int16x8_t) {x, y, x, y, x, y, x, y}.
> -     Generate following code:
> -     dup v0.h, x
> -     dup v1.h, y
> -     zip1 v0.h, v0.h, v1.h
> -     for "large enough" initializer.  */
> -
> -  if (n_elts >= 8)
> -    {
> -      int i;
> -      for (i = 2; i < n_elts; i++)
> -        if (!rtx_equal_p (XVECEXP (vals, 0, i), XVECEXP (vals, 0, i % 2)))
> -          break;
> -
> -      if (i == n_elts)
> -        {
> -          machine_mode mode = GET_MODE (target);
> -          rtx dest[2];
> -
> -          for (int i = 0; i < 2; i++)
> -            {
> -              rtx x = expand_vector_broadcast (mode, XVECEXP (vals, 0, i));
> -              dest[i] = force_reg (mode, x);
> -            }
> -
> -          rtvec v = gen_rtvec (2, dest[0], dest[1]);
> -          emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> -          return;
> -        }
> -    }
> -
>    enum insn_code icode = optab_handler (vec_set_optab, mode);
>    gcc_assert (icode != CODE_FOR_nothing);
>
> @@ -22243,7 +22212,7 @@ aarch64_expand_vector_init (rtx target, rtx vals)
>            }
>          XVECEXP (copy, 0, i) = subst;
>        }
> -      aarch64_expand_vector_init (target, copy);
> +      aarch64_expand_vector_init_fallback (target, copy);
>      }
>
>    /* Insert the variable lanes directly.  */
> @@ -22257,6 +22226,81 @@ aarch64_expand_vector_init (rtx target, rtx vals)
>      }
>  }
>
> +/* Return even or odd half of VALS depending on EVEN_P.  */
> +
> +static rtx
> +aarch64_unzip_vector_init (machine_mode mode, rtx vals, bool even_p)
> +{
> +  int n = XVECLEN (vals, 0);
> +  machine_mode new_mode
> +    = aarch64_simd_container_mode (GET_MODE_INNER (mode),
> +                                   GET_MODE_BITSIZE (mode).to_constant () / 2);
> +  rtvec vec = rtvec_alloc (n / 2);
> +  for (int i = 0; i < n/2; i++)

Formatting nit: n / 2

> +    RTVEC_ELT (vec, i) = (even_p) ? XVECEXP (vals, 0, 2 * i)
> +                                  : XVECEXP (vals, 0, 2 * i + 1);
> +  return gen_rtx_PARALLEL (new_mode, vec);
> +}
> +
> +/* Expand a vector initialisation sequence, such that TARGET is
> +   initialized to contain VALS.  */
> +
> +void
> +aarch64_expand_vector_init (rtx target, rtx vals)
> +{
> +  /* Try decomposing the initializer into even and odd halves and
> +     then ZIP them together.  Use the resulting sequence if it is
> +     strictly cheaper than loading VALS directly.
> +
> +     Prefer the fallback sequence in the event of a tie, since it
> +     will tend to use fewer registers.  */
> +
> +  machine_mode mode = GET_MODE (target);
> +  int n_elts = XVECLEN (vals, 0);
> +
> +  if (n_elts < 4
> +      || maybe_ne (GET_MODE_BITSIZE (mode), 128))
> +    {
> +      aarch64_expand_vector_init_fallback (target, vals);
> +      return;
> +    }
> +
> +  start_sequence ();
> +  rtx halves[2];
> +  unsigned costs[2];
> +  for (int i = 0; i < 2; i++)
> +    {
> +      start_sequence ();
> +      rtx new_vals
> +        = aarch64_unzip_vector_init (mode, vals, (i % 2) == 0);

Just i == 0 would be enough.  Also, this fits on one line.
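i.e. something like this (illustrative only):

      rtx new_vals = aarch64_unzip_vector_init (mode, vals, i == 0);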
> +      rtx tmp_reg = gen_reg_rtx (GET_MODE (new_vals));
> +      aarch64_expand_vector_init (tmp_reg, new_vals);
> +      halves[i] = gen_rtx_SUBREG (mode, tmp_reg, 0);
> +      rtx_insn *rec_seq = get_insns ();
> +      end_sequence ();
> +      costs[i] = seq_cost (rec_seq, !optimize_size);
> +      emit_insn (rec_seq);
> +    }
> +
> +  rtvec v = gen_rtvec (2, halves[0], halves[1]);
> +  rtx_insn *zip1_insn
> +    = emit_set_insn (target, gen_rtx_UNSPEC (mode, v, UNSPEC_ZIP1));
> +  unsigned seq_total_cost
> +    = (!optimize_size) ? std::max (costs[0], costs[1]) : costs[0] + costs[1];
> +  seq_total_cost += insn_cost (zip1_insn, !optimize_size);
> +
> +  rtx_insn *seq = get_insns ();
> +  end_sequence ();
> +
> +  start_sequence ();
> +  aarch64_expand_vector_init_fallback (target, vals);
> +  rtx_insn *fallback_seq = get_insns ();
> +  unsigned fallback_seq_cost = seq_cost (fallback_seq, !optimize_size);
> +  end_sequence ();
> +
> +  emit_insn (seq_total_cost < fallback_seq_cost ? seq : fallback_seq);
> +}
> +
>  /* Emit RTL corresponding to:
>     insr TARGET, ELEM.  */
>
> diff --git a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> index 8ab117c4dcd..30c86018773 100644
> --- a/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> +++ b/gcc/testsuite/gcc.target/aarch64/ldp_stp_16.c
> @@ -96,10 +96,10 @@ CONS2_FN (4, float);
>
>  /*
>  ** cons2_8_float:
> -**	dup	v([0-9]+)\.4s, .*
> +**	dup	v([0-9]+)\.2s, v1.s\[0\]
>  **	...
> -**	stp	q\1, q\1, \[x0\]
> -**	stp	q\1, q\1, \[x0, #?32\]
> +**	stp	q0, q0, \[x0\]
> +**	stp	q0, q0, \[x0, #?32\]

Leaving the capture in the first line while hard-coding q0 at the end
doesn't look right.  The original was written that way because nothing
guarantees a particular register allocation.  I think this now needs to
match more of the sequence.

> diff --git a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> similarity index 82%
> rename from gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> rename to gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> index ee775048589..e812d3946de 100644
> --- a/gcc/testsuite/gcc.target/aarch64/interleave-init-1.c
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-18.c
> @@ -7,8 +7,8 @@
>  /*
>  ** foo:
>  **	...
> -**	dup	v[0-9]+\.8h, w[0-9]+
> -**	dup	v[0-9]+\.8h, w[0-9]+
> +**	dup	v[0-9]+\.4h, w[0-9]+
> +**	dup	v[0-9]+\.4h, w[0-9]+
>  **	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
>  **	...
>  **	ret
> @@ -23,8 +23,8 @@ int16x8_t foo(int16_t x, int y)
>  /*
>  ** foo2:
>  **	...
> -**	dup	v[0-9]+\.8h, w[0-9]+
> -**	movi	v[0-9]+\.8h, 0x1
> +**	dup	v[0-9]+\.4h, w[0-9]+
> +**	movi	v[0-9]+\.4h, 0x1
>  **	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
>  **	...
>  **	ret
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-19.c b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
> new file mode 100644
> index 00000000000..e28fdcda29d
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-19.c
> @@ -0,0 +1,21 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3" } */
> +/* { dg-final { check-function-bodies "**" "" "" } } */
> +
> +#include <arm_neon.h>
> +
> +/*
> +** f_s8:
> +**	...
> +**	dup	v[0-9]+\.8b, w[0-9]+
> +**	adrp	x[0-9]+, \.LC[0-9]+
> +**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:.LC[0-9]+\]
> +**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b

This kind of match is dangerous for a test that enables scheduling, since
the zip sequences start with two independent sequences that build 64-bit
vectors.  Since the lines of the match don't build on each other (e.g.
they don't use captures to ensure that the zip operands are in the right
order), I think it'd be better to use scan-assemblers instead.  There's
then no need to match the adrp or the exact addressing mode of the ldr.
Just {ldr\td[0-9]+, } would be enough.
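For example, for f_s8 in vec-init-19.c, something along these lines (an
untested sketch, with the patterns taken from the quoted match above)
ought to be enough in place of the check-function-bodies block:

/* { dg-final { scan-assembler {\tdup\tv[0-9]+\.8b, w[0-9]+\n} } } */
/* { dg-final { scan-assembler {\tldr\td[0-9]+, } } } */
/* { dg-final { scan-assembler {\tzip1\tv[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b\n} } } */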
Same comments for the other tests.

Please also check that the new tests pass on big-endian targets.

Thanks,
Richard

> +**	ret
> +*/
> +
> +int8x16_t f_s8(int8_t x)
> +{
> +  return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
> +                       x, 5, x, 6, x, 7, x, 8 };
> +}
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-20.c b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
> new file mode 100644
> index 00000000000..9366ca349b6
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-20.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3" } */
> +/* { dg-final { check-function-bodies "**" "" "" } } */
> +
> +#include <arm_neon.h>
> +
> +/*
> +** f_s8:
> +**	...
> +**	adrp	x[0-9]+, \.LC[0-9]+
> +**	dup	v[0-9]+\.8b, w[0-9]+
> +**	ldr	d[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
> +**	ins	v0\.b\[0\], w0
> +**	zip1	v[0-9]+\.16b, v[0-9]+\.16b, v[0-9]+\.16b
> +**	ret
> +*/
> +
> +int8x16_t f_s8(int8_t x, int8_t y)
> +{
> +  return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
> +                       4, y, 5, y, 6, y, 7, y };
> +}
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-21.c b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
> new file mode 100644
> index 00000000000..e16459486d7
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-21.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3" } */
> +/* { dg-final { check-function-bodies "**" "" "" } } */
> +
> +#include <arm_neon.h>
> +
> +/*
> +** f_s8:
> +**	...
> +**	adrp	x[0-9]+, \.LC[0-9]+
> +**	ldr	q[0-9]+, \[x[0-9]+, #:lo12:\.LC[0-9]+\]
> +**	ins	v0\.b\[0\], w0
> +**	ins	v0\.b\[1\], w1
> +**	...
> +**	ret
> +*/
> +
> +int8x16_t f_s8(int8_t x, int8_t y)
> +{
> +  return (int8x16_t) { x, y, 1, 2, 3, 4, 5, 6,
> +                       7, 8, 9, 10, 11, 12, 13, 14 };
> +}
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
> new file mode 100644
> index 00000000000..8f35854c008
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-size.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile } */
> +/* { dg-options "-Os" } */
> +/* { dg-final { check-function-bodies "**" "" "" } } */
> +
> +/* Verify that fallback code-sequence is chosen over
> +   recursively generated code-sequence merged with zip1.  */
> +
> +/*
> +** f_s16:
> +**	...
> +**	sxth	w0, w0
> +**	fmov	s0, w0
> +**	ins	v0\.h\[1\], w1
> +**	ins	v0\.h\[2\], w2
> +**	ins	v0\.h\[3\], w3
> +**	ins	v0\.h\[4\], w4
> +**	ins	v0\.h\[5\], w5
> +**	ins	v0\.h\[6\], w6
> +**	ins	v0\.h\[7\], w7
> +**	...
> +**	ret
> +*/
> +
> +#include "vec-init-22.h"
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
> new file mode 100644
> index 00000000000..172d56ffdf1
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22-speed.c
> @@ -0,0 +1,27 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3" } */
> +/* { dg-final { check-function-bodies "**" "" "" } } */
> +
> +/* Verify that we recursively generate code for even and odd halves
> +   instead of fallback code.  This is so despite the longer code-gen
> +   because it has fewer dependencies and thus has lesser cost.  */
> +
> +/*
> +** f_s16:
> +**	...
> +**	sxth	w0, w0
> +**	sxth	w1, w1
> +**	fmov	d0, x0
> +**	fmov	d1, x1
> +**	ins	v[0-9]+\.h\[1\], w2
> +**	ins	v[0-9]+\.h\[1\], w3
> +**	ins	v[0-9]+\.h\[2\], w4
> +**	ins	v[0-9]+\.h\[2\], w5
> +**	ins	v[0-9]+\.h\[3\], w6
> +**	ins	v[0-9]+\.h\[3\], w7
> +**	zip1	v[0-9]+\.8h, v[0-9]+\.8h, v[0-9]+\.8h
> +**	...
> +**	ret
> +*/
> +
> +#include "vec-init-22.h"
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-init-22.h b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
> new file mode 100644
> index 00000000000..15b889d4097
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-init-22.h
> @@ -0,0 +1,7 @@
> +#include <arm_neon.h>
> +
> +int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
> +                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
> +{
> +  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
> +}