From: Richard Sandiford
To: Tamar Christina via Gcc-patches
Cc: Richard Biener, Tamar Christina, nd
Subject: Re: [PATCH]middle-end Add optimized float addsub without needing VEC_PERM_EXPR.
Date: Mon, 20 Jun 2022 14:10:46 +0100

Tamar Christina via Gcc-patches writes:
>> -----Original Message-----
>> From: Richard Biener
>> Sent: Monday, June 20, 2022 12:56 PM
>> To: Tamar Christina
>> Cc: Andrew Pinski via Gcc-patches; nd
>> Subject: RE: [PATCH]middle-end Add optimized float addsub without
>> needing VEC_PERM_EXPR.
>>
>> On Mon, 20 Jun 2022, Tamar Christina wrote:
>>
>> > > -----Original Message-----
>> > > From: Richard Biener
>> > > Sent: Saturday, June 18, 2022 11:49 AM
>> > > To: Andrew Pinski via Gcc-patches
>> > > Cc: Tamar Christina; nd
>> > > Subject: Re: [PATCH]middle-end Add optimized float addsub without
>> > > needing VEC_PERM_EXPR.
>> > >
>> > > > On 17.06.2022 at 22:34, Andrew Pinski via Gcc-patches
>> > > > <gcc-patches@gcc.gnu.org> wrote:
>> > > >
>> > > > On Thu, Jun 16, 2022 at 3:59 AM Tamar Christina via Gcc-patches
>> > > > wrote:
>> > > >>
>> > > >> Hi All,
>> > > >>
>> > > >> For IEEE 754 floating point formats we can replace a sequence of
>> > > >> alternating +/- with an fneg of a wider type followed by an fadd.
>> > > >> This eliminates the need for using a permutation.  This patch adds
>> > > >> a match.pd rule to recognize and do this rewriting.
>> > > >
>> > > > I don't think this is correct.  You don't check the format of the
>> > > > floating point to make sure this is valid (e.g. REAL_MODE_FORMAT's
>> > > > signbit_rw/signbit_ro field).
>> >
>> > Yes, I originally had this check, but I wondered whether it would be
>> > needed.  I'm not aware of any vector ISA where the 32-bit and 16-bit
>> > floats don't follow the IEEE data layout and semantics here.
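>> >
>> > For concreteness, a check along the lines Andrew suggests might look
>> > like this (a sketch only, untested, using the real_format fields from
>> > real.h; elem_mode here is the element mode computed later in the
>> > pattern):
>> >
>> >   /* Require a binary IEEE-style element format whose sign bit is
>> >      directly flippable, so that negating the wider type flips
>> >      exactly the sign bit of every even element.  */
>> >   const real_format *fmt = REAL_MODE_FORMAT (elem_mode);
>> >   if (fmt == NULL
>> >       || !fmt->is_binary_ieee_compatible
>> >       || fmt->signbit_rw < 0
>> >       || fmt->signbit_rw != fmt->signbit_ro)
>> >     return NULL_TREE;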
>> >
>> > My preference would be to ask the target about the data format of its
>> > vector floating point types, because strictly speaking I don't think
>> > there needs to be a direct correlation between the scalar and vector
>> > formats.  But I know Richi won't like that, so the check is probably
>> > the way to go.
>> >
>> > > > Also, would it just be better if you did the xor in integer mode
>> > > > (using the signbit_rw field for the correct bit)?
>> > > > And then making sure the target optimizes the xor to the neg
>> > > > instruction when needed?
>> >
>> > I don't really see the advantage of this one.  It's not removing an
>> > instruction and it's assuming the vector ISA can do integer ops on a
>> > floating point vector cheaply.  Since match.pd doesn't have the
>> > ability to do costing, I'd rather not do this.
>> >
>> > > I'm also worried about using FP operations for the negate here.
>> > > When @1 is constant do we still constant fold this correctly?
>> >
>> > We never did constant folding for this case; the folding
>> > infrastructure doesn't know how to fold the VEC_PERM_EXPR.  So even
>> > with @0 and @1 constant, no folding takes place even today if we
>> > vectorize.
>> >
>> > > For costing purposes it would be nice to make this visible to the
>> > > vectorizer.
>> >
>> > I initially wanted to use VEC_ADDSUB for this, but noticed it didn't
>> > trigger in a number of places I had expected it to.  While looking
>> > into it I noticed it's because that optab follows the x86 instruction
>> > semantics, so I left it alone.
>> >
>> > It felt like adding a third pattern here might be confusing.  However,
>> > I can also use the SLP pattern matcher to rewrite it without an optab
>> > if you prefer that?  The operations will then be costed normally.
>> >
>> > > Also is this really good for all targets?  Can there be issues with
>> > > reformatting when using FP ops as in your patch, or with using
>> > > integer XOR as suggested, making this more expensive than the blend?
>> >
>> > I don't think so with the fp ops alone, since it's using two fp ops
>> > already and after the change still two fp ops, and I can't imagine
>> > that a target would have a slow -a.
>>
>> Wouldn't a target need to re-check if lanes are NaN or denormal if after
>> an SFmode lane operation a DFmode lane operation follows?  IIRC that is
>> what usually makes punning "integer" vectors as FP vectors costly.
>
> I guess this really depends on the target.
>
>> Note one option would be to emit a multiply with { 1, -1, 1, -1 } on
>> GIMPLE, where targets could then opt in to handling this via a DFmode
>> negate via a combine pattern?  Not sure if this can even be done
>> starting from the vec-perm RTL IL.
>
> But multiplies can be so expensive that the VEC_PERM_EXPR would still be
> better.  At least, as you say, the target would have costed that.
>
>> I fear whether (neg:V2DF (subreg:V2DF (reg:V4SF))) is a good idea will
>> heavily depend on the target CPU (not only the ISA).  For RISC-V for
>> example I think the DF lanes do not overlap with two SF lanes (so the
>> same with gcn, I think).
>
> Right, so I think the conclusion is I need to move this to the backend.

I wouldn't go that far :)  It just means that it needs more conditions.
E.g. whether the subreg is cheap should be testable via MODES_TIEABLE_P.
I don't think that macro should be true for modes that cause
reinterpretation (and it isn't for the RISC-V example).  There's also
targetm.can_change_mode_class (…, ALL_REGS).
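
Concretely, the shape of the gate I mean might be something like the
following (a sketch only; the helper name is made up):

  /* Sketch: true if reinterpreting a vector of mode FROM as a vector
     of mode TO is expected to be a pure register rename on the target,
     so that the view_converts in the new pattern are free.  */
  static bool
  addsub_punning_cheap_p (machine_mode from, machine_mode to)
  {
    return (targetm.modes_tieable_p (from, to)
            && targetm.can_change_mode_class (from, to, ALL_REGS));
  }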
Thanks,
Richard

>
> Thanks,
> Tamar
>
>>
>> Richard.
>>
>> > The XOR one I wouldn't do, as the vector int and vector float could,
>> > for instance, be in different register files, or FP could be on a
>> > co-processor, etc.  Mixing FP and integer ops in this case I can
>> > imagine can lead to something suboptimal.  Also, for targets with
>> > masking/predication the VEC_PERM_EXPR could potentially be lowered to
>> > a mask/predicate in the backend, whereas the XOR approach is far less
>> > likely to be.
>> >
>> > Thanks,
>> > Tamar
>> >
>> > > Richard.
>> > >
>> > > > Thanks,
>> > > > Andrew Pinski
>> > > >
>> > > >> For
>> > > >>
>> > > >> void f (float *restrict a, float *restrict b, float *res, int n)
>> > > >> {
>> > > >>    for (int i = 0; i < (n & -4); i+=2)
>> > > >>     {
>> > > >>       res[i+0] = a[i+0] + b[i+0];
>> > > >>       res[i+1] = a[i+1] - b[i+1];
>> > > >>     }
>> > > >> }
>> > > >>
>> > > >> we generate:
>> > > >>
>> > > >> .L3:
>> > > >>         ldr     q1, [x1, x3]
>> > > >>         ldr     q0, [x0, x3]
>> > > >>         fneg    v1.2d, v1.2d
>> > > >>         fadd    v0.4s, v0.4s, v1.4s
>> > > >>         str     q0, [x2, x3]
>> > > >>         add     x3, x3, 16
>> > > >>         cmp     x3, x4
>> > > >>         bne     .L3
>> > > >>
>> > > >> now instead of:
>> > > >>
>> > > >> .L3:
>> > > >>         ldr     q1, [x0, x3]
>> > > >>         ldr     q2, [x1, x3]
>> > > >>         fadd    v0.4s, v1.4s, v2.4s
>> > > >>         fsub    v1.4s, v1.4s, v2.4s
>> > > >>         tbl     v0.16b, {v0.16b - v1.16b}, v3.16b
>> > > >>         str     q0, [x2, x3]
>> > > >>         add     x3, x3, 16
>> > > >>         cmp     x3, x4
>> > > >>         bne     .L3
>> > > >>
>> > > >> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>> > > >>
>> > > >> Thanks to George Steed for the idea.
>> > > >>
>> > > >> Ok for master?
>> > > >>
>> > > >> Thanks,
>> > > >> Tamar
>> > > >>
>> > > >> gcc/ChangeLog:
>> > > >>
>> > > >>         * match.pd: Add fneg/fadd rule.
>> > > >>
>> > > >> gcc/testsuite/ChangeLog:
>> > > >>
>> > > >>         * gcc.target/aarch64/simd/addsub_1.c: New test.
>> > > >>         * gcc.target/aarch64/sve/addsub_1.c: New test.
>> > > >>
>> > > >> --- inline copy of patch --
>> > > >> diff --git a/gcc/match.pd b/gcc/match.pd
>> > > >> index 51b0a1b562409af535e53828a10c30b8a3e1ae2e..af1c98d4a2831f38258d6fc1bbe811c8ee6c7c6e 100644
>> > > >> --- a/gcc/match.pd
>> > > >> +++ b/gcc/match.pd
>> > > >> @@ -7612,6 +7612,49 @@ and,
>> > > >>  (simplify (reduc (op @0 VECTOR_CST@1))
>> > > >>    (op (reduc:type @0) (reduc:type @1))))
>> > > >>
>> > > >> +/* Simplify vector floating point operations of alternating sub/add pairs
>> > > >> +   into using an fneg of a wider element type followed by a normal add.
>> > > >> +   Under IEEE 754 the fneg of the wider type will negate every even entry
>> > > >> +   and when doing an add we get a sub of the even and an add of every odd
>> > > >> +   element.  */
>> > > >> +(simplify
>> > > >> + (vec_perm (plus:c @0 @1) (minus @0 @1) VECTOR_CST@2)
>> > > >> + (if (!VECTOR_INTEGER_TYPE_P (type) && !BYTES_BIG_ENDIAN)
>> > > >> +  (with
>> > > >> +   {
>> > > >> +     /* Build a vector of integers from the tree mask.  */
>> > > >> +     vec_perm_builder builder;
>> > > >> +     if (!tree_to_vec_perm_builder (&builder, @2))
>> > > >> +       return NULL_TREE;
>> > > >> +
>> > > >> +     /* Create a vec_perm_indices for the integer vector.  */
>> > > >> +     poly_uint64 nelts = TYPE_VECTOR_SUBPARTS (type);
>> > > >> +     vec_perm_indices sel (builder, 2, nelts);
>> > > >> +   }
>> > > >> +   (if (sel.series_p (0, 2, 0, 2))
>> > > >> +    (with
>> > > >> +     {
>> > > >> +       machine_mode vec_mode = TYPE_MODE (type);
>> > > >> +       auto elem_mode = GET_MODE_INNER (vec_mode);
>> > > >> +       auto nunits = exact_div (GET_MODE_NUNITS (vec_mode), 2);
>> > > >> +       tree stype;
>> > > >> +       switch (elem_mode)
>> > > >> +         {
>> > > >> +         case E_HFmode:
>> > > >> +           stype = float_type_node;
>> > > >> +           break;
>> > > >> +         case E_SFmode:
>> > > >> +           stype = double_type_node;
>> > > >> +           break;
>> > > >> +         default:
>> > > >> +           return NULL_TREE;
>> > > >> +         }
>> > > >> +       tree ntype = build_vector_type (stype, nunits);
>> > > >> +       if (!ntype)
>> > > >> +         return NULL_TREE;
>> > > >> +     }
>> > > >> +     (plus (view_convert:type (negate (view_convert:ntype @1)))
>> > > >> +           @0))))))
>> > > >> +
>> > > >>  (simplify
>> > > >>   (vec_perm @0 @1 VECTOR_CST@2)
>> > > >>   (with
>> > > >> diff --git a/gcc/testsuite/gcc.target/aarch64/simd/addsub_1.c b/gcc/testsuite/gcc.target/aarch64/simd/addsub_1.c
>> > > >> new file mode 100644
>> > > >> index 0000000000000000000000000000000000000000..1fb91a34c421bbd2894faa0dbbf1b47ad43310c4
>> > > >> --- /dev/null
>> > > >> +++ b/gcc/testsuite/gcc.target/aarch64/simd/addsub_1.c
>> > > >> @@ -0,0 +1,56 @@
>> > > >> +/* { dg-do compile } */
>> > > >> +/* { dg-require-effective-target arm_v8_2a_fp16_neon_ok } */
>> > > >> +/* { dg-options "-Ofast" } */
>> > > >> +/* { dg-add-options arm_v8_2a_fp16_neon } */
>> > > >> +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } */
>> > > >> +
>> > > >> +#pragma GCC target "+nosve"
>> > > >> +
>> > > >> +/*
>> > > >> +** f1:
>> > > >> +** ...
>> > > >> +**     fneg    v[0-9]+.2d, v[0-9]+.2d
>> > > >> +**     fadd    v[0-9]+.4s, v[0-9]+.4s, v[0-9]+.4s
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void f1 (float *restrict a, float *restrict b, float *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -4); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >> +
>> > > >> +/*
>> > > >> +** d1:
>> > > >> +** ...
>> > > >> +**     fneg    v[0-9]+.4s, v[0-9]+.4s
>> > > >> +**     fadd    v[0-9]+.8h, v[0-9]+.8h, v[0-9]+.8h
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void d1 (_Float16 *restrict a, _Float16 *restrict b, _Float16 *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -8); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >> +
>> > > >> +/*
>> > > >> +** e1:
>> > > >> +** ...
>> > > >> +**     fadd    v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2d
>> > > >> +**     fsub    v[0-9]+.2d, v[0-9]+.2d, v[0-9]+.2d
>> > > >> +**     ins     v[0-9]+.d\[1\], v[0-9]+.d\[1\]
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void e1 (double *restrict a, double *restrict b, double *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -4); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/addsub_1.c b/gcc/testsuite/gcc.target/aarch64/sve/addsub_1.c
>> > > >> new file mode 100644
>> > > >> index 0000000000000000000000000000000000000000..ea7f9d9db2c8c9a3efe5c7951a314a29b7a7a922
>> > > >> --- /dev/null
>> > > >> +++ b/gcc/testsuite/gcc.target/aarch64/sve/addsub_1.c
>> > > >> @@ -0,0 +1,52 @@
>> > > >> +/* { dg-do compile } */
>> > > >> +/* { dg-options "-Ofast" } */
>> > > >> +/* { dg-final { check-function-bodies "**" "" "" { target { le } } } } */
>> > > >> +
>> > > >> +/*
>> > > >> +** f1:
>> > > >> +** ...
>> > > >> +**     fneg    z[0-9]+.d, p[0-9]+/m, z[0-9]+.d
>> > > >> +**     fadd    z[0-9]+.s, z[0-9]+.s, z[0-9]+.s
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void f1 (float *restrict a, float *restrict b, float *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -4); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >> +
>> > > >> +/*
>> > > >> +** d1:
>> > > >> +** ...
>> > > >> +**     fneg    z[0-9]+.s, p[0-9]+/m, z[0-9]+.s
>> > > >> +**     fadd    z[0-9]+.h, z[0-9]+.h, z[0-9]+.h
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void d1 (_Float16 *restrict a, _Float16 *restrict b, _Float16 *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -8); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >> +
>> > > >> +/*
>> > > >> +** e1:
>> > > >> +** ...
>> > > >> +**     fsub    z[0-9]+.d, z[0-9]+.d, z[0-9]+.d
>> > > >> +**     movprfx z[0-9]+.d, p[0-9]+/m, z[0-9]+.d
>> > > >> +**     fadd    z[0-9]+.d, p[0-9]+/m, z[0-9]+.d, z[0-9]+.d
>> > > >> +** ...
>> > > >> +*/
>> > > >> +void e1 (double *restrict a, double *restrict b, double *res, int n)
>> > > >> +{
>> > > >> +   for (int i = 0; i < (n & -4); i+=2)
>> > > >> +    {
>> > > >> +      res[i+0] = a[i+0] + b[i+0];
>> > > >> +      res[i+1] = a[i+1] - b[i+1];
>> > > >> +    }
>> > > >> +}
>> > > >>
>> > > >> --
>>
>> --
>> Richard Biener
>> SUSE Software Solutions Germany GmbH, Frankenstraße 146, 90461
>> Nuernberg, Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald,
>> Boudien Moerman; HRB 36809 (AG Nuernberg)