From: Richard Sandiford
To: Tamar Christina
Cc: gcc-patches@gcc.gnu.org, nd@arm.com, Richard.Earnshaw@arm.com, Marcus.Shawcroft@arm.com, Kyrylo.Tkachov@arm.com
Subject: Re: [PATCH]AArch64: Use SVE unpredicated LOGICAL expressions when Advanced SIMD inefficient [PR109154]
Date: Wed, 27 Sep 2023 09:50:07 +0100
In-Reply-To: (Tamar Christina's message of "Wed, 27 Sep 2023 01:51:30 +0100")

Tamar Christina writes:
> Hi All,
>
> SVE has a much bigger immediate encoding range for bitmasks than Advanced
> SIMD has, and so on an SVE-capable system, if an Advanced SIMD inclusive OR
> by immediate would require a reload, we can use an unpredicated SVE ORR
> instead.
>
> This has both speed and size improvements.
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> 	PR tree-optimization/109154
> 	* config/aarch64/aarch64.md (<optab><mode>3): Convert to new syntax
> 	and add an SVE split case.
> 	* config/aarch64/iterators.md (VCONV, vconv): New.
>
> gcc/testsuite/ChangeLog:
>
> 	PR tree-optimization/109154
> 	* gcc.target/aarch64/sve/fneg-abs_2.c: Updated.
> 	* gcc.target/aarch64/sve/fneg-abs_4.c: Updated.
>
> --- inline copy of patch --
> diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
> index 60c92213c75a2a4c18a6b59ae52fe45d1e872718..377c5cafedd43d8d1320489a36267cc6e5f15239 100644
> --- a/gcc/config/aarch64/aarch64.md
> +++ b/gcc/config/aarch64/aarch64.md
> @@ -4551,17 +4551,27 @@ (define_insn_and_split "*aarch64_and<mode>_imm2"
>  }
>  )
>  
> -(define_insn "<optab><mode>3"
> -  [(set (match_operand:GPI 0 "register_operand" "=r,rk,w")
> -	(LOGICAL:GPI (match_operand:GPI 1 "register_operand" "%r,r,w")
> -		     (match_operand:GPI 2 "aarch64_logical_operand" "r,<lconst>,w")))]
> -  ""
> -  "@
> -  <logical>\\t%<w>0, %<w>1, %<w>2
> -  <logical>\\t%<w>0, %<w>1, %<w>2
> -  <logical>\\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>"
> -  [(set_attr "type" "logic_reg,logic_imm,neon_logic")
> -   (set_attr "arch" "*,*,simd")]
> +(define_insn_and_split "<optab><mode>3"
> +  [(set (match_operand:GPI 0 "register_operand")
> +	(LOGICAL:GPI (match_operand:GPI 1 "register_operand")
> +		     (match_operand:GPI 2 "aarch64_logical_operand")))]
> +  ""
> +  {@ [cons: =0, 1, 2; attrs: type, arch]
> +     [r , %r, r        ; logic_reg , *   ] <logical>\t%<w>0, %<w>1, %<w>2
> +     [rk, r , <lconst> ; logic_imm , *   ] <logical>\t%<w>0, %<w>1, %<w>2
> +     [w , 0 , <lconst> ; *         , sve ] #
> +     [w , w , w        ; neon_logic, simd] <logical>\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
> +  }
> +  "&& TARGET_SVE && rtx_equal_p (operands[0], operands[1])
> +   && satisfies_constraint_<lconst> (operands[2])
> +   && FP_REGNUM_P (REGNO (operands[0]))"
> +  [(const_int 0)]
> +  {
> +    rtx op1 = lowpart_subreg (<VCONV>mode, operands[1], <MODE>mode);
> +    rtx op2 = gen_const_vec_duplicate (<VCONV>mode, operands[2]);
> +    emit_insn (gen_<optab><vconv>3 (op1, op1, op2));
> +    DONE;
> +  }
> )

The WIP SME patches add a %Z modifier for "z" register prefixes,
similarly to b/h/s/d for scalar FP.  With that I think the alternative
can be:

     [w , 0 , <lconst> ; * , sve ] <logical>\t%Z0.<s>, %Z0.<s>, #%2

although it would be nice to keep the hex constant.

Will try to post the patches up to that part soon.

Thanks,
Richard

>
> ;; zero_extend version of above
> diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
> index d17becc37e230684beaee3c69e2a0f0ce612eda5..568cd5d1a3a9e00475376177ad13de72609df3d8 100644
> --- a/gcc/config/aarch64/iterators.md
> +++ b/gcc/config/aarch64/iterators.md
> @@ -1432,6 +1432,11 @@ (define_mode_attr VCONQ [(V8QI "V16QI") (V16QI "V16QI")
>  			 (HI   "V8HI") (QI   "V16QI")
>  			 (SF   "V4SF") (DF   "V2DF")])
>  
> +;; 128-bit container modes for the lower part of an SVE vector to the inner or
> +;; scalar source mode.
> +(define_mode_attr VCONV [(SI "VNx4SI") (DI "VNx2DI")])
> +(define_mode_attr vconv [(SI "vnx4si") (DI "vnx2di")])
> +
>  ;; Half modes of all vector modes.
>  (define_mode_attr VHALF [(V8QI "V4QI") (V16QI "V8QI")
>  			 (V4HI "V2HI") (V8HI  "V4HI")
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_2.c b/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_2.c
> index a60cd31b9294af2dac69eed1c93f899bd5c78fca..fe9f27bf91b8fb18205a5891a5d5e847a5d88e4b 100644
> --- a/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_2.c
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_2.c
> @@ -7,8 +7,7 @@
>  
>  /*
>  ** f1:
> -**	movi	v[0-9]+.2s, 0x80, lsl 24
> -**	orr	v[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b
> +**	orr	z0.s, z0.s, #0x80000000
>  **	ret
>  */
>  float32_t f1 (float32_t a)
> @@ -18,9 +17,7 @@
>  
>  /*
>  ** f2:
> -**	mov	x0, -9223372036854775808
> -**	fmov	d[0-9]+, x0
> -**	orr	v[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b
> +**	orr	z0.d, z0.d, #0x8000000000000000
>  **	ret
>  */
>  float64_t f2 (float64_t a)
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_4.c b/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_4.c
> index 21f2a8da2a5d44e3d01f6604ca7be87e3744d494..707bcb0b6c53e212b55a255f500e9e548e9ccd80 100644
> --- a/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_4.c
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/fneg-abs_4.c
> @@ -6,9 +6,7 @@
>  
>  /*
>  ** negabs:
> -**	mov	x0, -9223372036854775808
> -**	fmov	d[0-9]+, x0
> -**	orr	v[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b
> +**	orr	z0.d, z0.d, #0x8000000000000000
>  **	ret
>  */
>  double negabs (double x)
> @@ -22,8 +20,7 @@
>  
>  /*
>  ** negabsf:
> -**	movi	v[0-9]+.2s, 0x80, lsl 24
> -**	orr	v[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b
> +**	orr	z0.s, z0.s, #0x80000000
>  **	ret
>  */
>  float negabsf (float x)