From: Richard Sandiford <richard.sandiford@arm.com>
To: Jonathan Wright via Gcc-patches <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH 10/20] aarch64: Use RTL builtins for FP ml[as]_n intrinsics
Date: Wed, 28 Apr 2021 16:11:35 +0100

Jonathan Wright via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> Hi,
>
> As subject, this patch rewrites the floating-point vml[as][q]_n Neon
> intrinsics to use RTL builtins rather than inline assembly code, allowing
> for better scheduling and optimization.
>
> Regression tested and bootstrapped on aarch64-none-linux-gnu - no
> issues.
>
> Ok for master?
>
> Thanks,
> Jonathan
>
> ---
>
> gcc/ChangeLog:
>
> 2021-01-18  Jonathan Wright  <jonathan.wright@arm.com>
>
>	* config/aarch64/aarch64-simd-builtins.def: Add
>	float_ml[as]_n builtin generator macros.
>	* config/aarch64/aarch64-simd.md (mul_n<mode>3): Define.
>	(aarch64_float_mla_n<mode>): Define.
>	(aarch64_float_mls_n<mode>): Define.
>	* config/aarch64/arm_neon.h (vmla_n_f32): Use RTL builtin
>	instead of inline asm.
>	(vmlaq_n_f32): Likewise.
>	(vmls_n_f32): Likewise.
>	(vmlsq_n_f32): Likewise.
>
> diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
> index 0f44ed84ff9d08d808b1b2dfe528db5208b134f5..547509474c23daf6882ed2f8407ddb5caf1d1b91 100644
> --- a/gcc/config/aarch64/aarch64-simd-builtins.def
> +++ b/gcc/config/aarch64/aarch64-simd-builtins.def
> @@ -664,6 +664,9 @@
>    BUILTIN_VHSDF (TERNOP, fnma, 4, FP)
>    VAR1 (TERNOP, fnma, 4, FP, hf)
>
> +  BUILTIN_VDQSF (TERNOP, float_mla_n, 0, FP)
> +  BUILTIN_VDQSF (TERNOP, float_mls_n, 0, FP)
> +
>    /* Implemented by aarch64_simd_bsl<mode>.  */
>    BUILTIN_VDQQH (BSL_P, simd_bsl, 0, NONE)
>    VAR2 (BSL_P, simd_bsl,0, NONE, di, v2di)
> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> index 5f701dd2775290156634ef8c6feccecd359e9ec9..d016970a2c278405b270a0ac745221e69f0f625e 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -2614,6 +2614,17 @@
>    [(set_attr "type" "neon_fp_mul_<stype><q>")]
>  )
>
> +(define_insn "mul_n<mode>3"
> +  [(set (match_operand:VHSDF 0 "register_operand" "=w")
> +	(mult:VHSDF
> +	  (vec_duplicate:VHSDF
> +	    (match_operand:<VEL> 2 "register_operand" "w"))
> +	  (match_operand:VHSDF 1 "register_operand" "w")))]
> +  "TARGET_SIMD"
> +  "fmul\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[0]"

This functionality should already be provided by:

(define_insn "*aarch64_mul3_elt_from_dup<mode>"
 [(set (match_operand:VMUL 0 "register_operand" "=w")
    (mult:VMUL
      (vec_duplicate:VMUL
	(match_operand:<VEL> 1 "register_operand" "<h_con>"))
      (match_operand:VMUL 2 "register_operand" "w")))]
  "TARGET_SIMD"
  "<f>mul\t%0.<Vtype>, %2.<Vtype>, %1.<Vetype>[0]";
  [(set_attr "type" "neon<fp>_mul_<stype>_scalar<q>")]
)

so I think we should instead rename that to mul_n<mode>3 and reorder
its operands.
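For concreteness, that would give something like the following (a
sketch only: the rename and operand swap applied to the pattern quoted
above, keeping its existing constraints and attributes):

(define_insn "mul_n<mode>3"
 [(set (match_operand:VMUL 0 "register_operand" "=w")
    (mult:VMUL
      (vec_duplicate:VMUL
	(match_operand:<VEL> 2 "register_operand" "<h_con>"))
      (match_operand:VMUL 1 "register_operand" "w")))]
  "TARGET_SIMD"
  "<f>mul\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[0]"
  [(set_attr "type" "neon<fp>_mul_<stype>_scalar<q>")]
)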
Thanks,
Richard

> +  [(set_attr "type" "neon_fp_mul_<stype><q>")]
> +)
> +
>  (define_expand "div<mode>3"
>   [(set (match_operand:VHSDF 0 "register_operand")
>         (div:VHSDF (match_operand:VHSDF 1 "register_operand")
> @@ -2651,6 +2662,40 @@
>    [(set_attr "type" "neon_fp_abs_<stype><q>")]
>  )
>
> +(define_expand "aarch64_float_mla_n<mode>"
> +  [(set (match_operand:VDQSF 0 "register_operand")
> +	(plus:VDQSF
> +	  (mult:VDQSF
> +	    (vec_duplicate:VDQSF
> +	      (match_operand:<VEL> 3 "register_operand"))
> +	    (match_operand:VDQSF 2 "register_operand"))
> +	  (match_operand:VDQSF 1 "register_operand")))]
> +  "TARGET_SIMD"
> +  {
> +    rtx scratch = gen_reg_rtx (<MODE>mode);
> +    emit_insn (gen_mul_n<mode>3 (scratch, operands[2], operands[3]));
> +    emit_insn (gen_add<mode>3 (operands[0], operands[1], scratch));
> +    DONE;
> +  }
> +)
> +
> +(define_expand "aarch64_float_mls_n<mode>"
> +  [(set (match_operand:VDQSF 0 "register_operand")
> +	(minus:VDQSF
> +	  (match_operand:VDQSF 1 "register_operand")
> +	  (mult:VDQSF
> +	    (vec_duplicate:VDQSF
> +	      (match_operand:<VEL> 3 "register_operand"))
> +	    (match_operand:VDQSF 2 "register_operand"))))]
> +  "TARGET_SIMD"
> +  {
> +    rtx scratch = gen_reg_rtx (<MODE>mode);
> +    emit_insn (gen_mul_n<mode>3 (scratch, operands[2], operands[3]));
> +    emit_insn (gen_sub<mode>3 (operands[0], operands[1], scratch));
> +    DONE;
> +  }
> +)
> +
>  (define_insn "fma<mode>4"
>   [(set (match_operand:VHSDF 0 "register_operand" "=w")
>         (fma:VHSDF (match_operand:VHSDF 1 "register_operand" "w")
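(For reference, these expands emit the same fmul-by-element plus
fadd/fsub pair that the inline asm below hard-codes, but as two
separate insns that the compiler is free to schedule or combine;
register allocation here is illustrative only:

	fmul	v1.2s, v3.2s, v4.s[0]	// tmp = b * c[0], lane-wise
	fadd	v0.2s, v2.2s, v1.2s	// result = a + tmp
)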
> diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
> index 1c48c166b5b9aaf052761f95121c26845221dae9..c0399c4dc428fe63c07fce0d12bb1580ead1542f 100644
> --- a/gcc/config/aarch64/arm_neon.h
> +++ b/gcc/config/aarch64/arm_neon.h
> @@ -7050,13 +7050,7 @@ __extension__ extern __inline float32x2_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  vmla_n_f32 (float32x2_t __a, float32x2_t __b, float32_t __c)
>  {
> -  float32x2_t __result;
> -  float32x2_t __t1;
> -  __asm__ ("fmul %1.2s, %3.2s, %4.s[0]; fadd %0.2s, %0.2s, %1.2s"
> -           : "=w"(__result), "=w"(__t1)
> -           : "0"(__a), "w"(__b), "w"(__c)
> -           : /* No clobbers */);
> -  return __result;
> +  return __builtin_aarch64_float_mla_nv2sf (__a, __b, __c);
>  }
>
>  __extension__ extern __inline int16x4_t
> @@ -7403,13 +7397,7 @@ __extension__ extern __inline float32x4_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  vmlaq_n_f32 (float32x4_t __a, float32x4_t __b, float32_t __c)
>  {
> -  float32x4_t __result;
> -  float32x4_t __t1;
> -  __asm__ ("fmul %1.4s, %3.4s, %4.s[0]; fadd %0.4s, %0.4s, %1.4s"
> -           : "=w"(__result), "=w"(__t1)
> -           : "0"(__a), "w"(__b), "w"(__c)
> -           : /* No clobbers */);
> -  return __result;
> +  return __builtin_aarch64_float_mla_nv4sf (__a, __b, __c);
>  }
>
>  __extension__ extern __inline int16x8_t
> @@ -7496,13 +7484,7 @@ __extension__ extern __inline float32x2_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  vmls_n_f32 (float32x2_t __a, float32x2_t __b, float32_t __c)
>  {
> -  float32x2_t __result;
> -  float32x2_t __t1;
> -  __asm__ ("fmul %1.2s, %3.2s, %4.s[0]; fsub %0.2s, %0.2s, %1.2s"
> -           : "=w"(__result), "=w"(__t1)
> -           : "0"(__a), "w"(__b), "w"(__c)
> -           : /* No clobbers */);
> -  return __result;
> +  return __builtin_aarch64_float_mls_nv2sf (__a, __b, __c);
>  }
>
>  __extension__ extern __inline int16x4_t
> @@ -7853,13 +7835,7 @@ __extension__ extern __inline float32x4_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  vmlsq_n_f32 (float32x4_t __a, float32x4_t __b, float32_t __c)
>  {
> -  float32x4_t __result;
> -  float32x4_t __t1;
> -  __asm__ ("fmul %1.4s, %3.4s, %4.s[0]; fsub %0.4s, %0.4s, %1.4s"
> -           : "=w"(__result), "=w"(__t1)
> -           : "0"(__a), "w"(__b), "w"(__c)
> -           : /* No clobbers */);
> -  return __result;
> +  return __builtin_aarch64_float_mls_nv4sf (__a, __b, __c);
>  }
>
>  __extension__ extern __inline int16x8_t
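For readers unfamiliar with the intrinsics under discussion, a minimal
usage example (hypothetical function name; lane-wise semantics as
defined for vmla_n_f32 in the Arm Neon intrinsics reference):

#include <arm_neon.h>

/* Computes result[i] = acc[i] + v[i] * s for each of the two lanes.
   With this patch it expands via the RTL builtin to an fmul-by-element
   followed by an fadd, instead of a fixed inline-asm block.  */
float32x2_t
scale_accumulate (float32x2_t acc, float32x2_t v, float32_t s)
{
  return vmla_n_f32 (acc, v, s);
}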