From: Richard Sandiford
To: Andrew Carlotti via Gcc-patches
Subject: Re: [PATCH] aarch64: Move vreinterpret definitions into the compiler
Date: Tue, 05 Jul 2022 07:52:34 +0100
In-Reply-To: (Andrew Carlotti via Gcc-patches's message of "Wed, 29 Jun 2022 12:32:36 +0000")

Sorry for the slow review.

Andrew Carlotti via Gcc-patches writes:
> Hi,
>
> This removes a significant number of intrinsic definitions from the
> arm_neon.h header file, and reduces the amount of code duplication.
> The new macros and
> data structures are intended to also facilitate moving other intrinsic
> definitions out of the header file in future.

Nice.

> There is a slight change in the behaviour of the bf16 vreinterpret
> intrinsics when compiling without bf16 support.  Expressions like:
>
>   b = vreinterpretq_s32_bf16(vreinterpretq_bf16_s64(a));
>
> are now compiled successfully, instead of causing a 'target specific
> option mismatch' during inlining.

Yeah.  The ACLE says that these are conditional on +bf16, but no-one
seems to have a strong opinion about whether it needs to be that way.
Accepting them unconditionally makes them consistent with arm_sve.h.

Could you split out the part that adds V1DI?  I never did understand
why we had V1DF but not V1DI, but neither one seemed obviously more
right than the other.  Having the V1DI change as a separate patch
would help with bisecting and might help with later archaeology.

> @@ -523,6 +581,99 @@ static aarch64_simd_builtin_datum aarch64_simd_builtin_data[] = {
>    FCMLA_LANEQ_BUILTIN (180, v4hf, fcmla_laneq, V4HF, true) \
>    FCMLA_LANEQ_BUILTIN (270, v4hf, fcmla_laneq, V4HF, true) \
>
> +
> +/* vreinterpret intrinsics are defined for any pair of element types.
> +     { _bf16           }   { _bf16           }  (if bf16 is supported)

Like you say, the bf16 alternatives are now unconditional, so it might
be better to remove "(if bf16 is supported)".

> +     { _f16 _f32 _f64  }   { _f16 _f32 _f64  }
> +     { _s8 _s16 _s32 _s64 } x { _s8 _s16 _s32 _s64 }
> +     { _u8 _u16 _u32 _u64 }   { _u8 _u16 _u32 _u64 }
> +     { _p8 _p16 _p64   }   { _p8 _p16 _p64   }.
> +  */
> +#define VREINTERPRET_BUILTIN2(A, B) \
> +  VREINTERPRET_BUILTIN (A, B, d)
> […]
> @@ -636,6 +804,22 @@ static aarch64_fcmla_laneq_builtin_datum aarch64_fcmla_lane_builtin_data[] = {
>    AARCH64_SIMD_FCMLA_LANEQ_BUILTINS
>  };
>
> +#undef VREINTERPRET_BUILTIN
> +#define VREINTERPRET_BUILTIN(A, B, L) \
> +  {"vreinterpret" SIMD_INTR_LENGTH_CHAR(L) "_" #A "_" #B, \
> +   AARCH64_SIMD_BUILTIN_VREINTERPRET##L##_##A##_##B, \
> +   2, \
> +   { SIMD_INTR_MODE(A, L), SIMD_INTR_MODE(B, L) }, \
> +   { SIMD_INTR_QUAL(A), SIMD_INTR_QUAL(B) }, \
> +   FLAG_AUTO_FP, \
> +   !strcmp(#A, #B) \

Could we instead use:

  SIMD_INTR_MODE(A, L) == SIMD_INTR_MODE(B, L)

?  That should be a definite compile-time constant.  It should then
be possible to make…

> +  },
> +
> +static aarch64_simd_intrinsic_datum aarch64_simd_intrinsic_data[] = {
> +  AARCH64_SIMD_VREINTERPRET_BUILTINS
> +};

…this a static const array.

> +
> +
>  #undef CRC32_BUILTIN
>
>  static GTY(()) tree aarch64_builtin_decls[AARCH64_BUILTIN_MAX];
> @@ -1127,6 +1311,58 @@ aarch64_init_fcmla_laneq_builtins (void)
>      }
>  }
>
> +void
> +aarch64_init_simd_intrinsics (void)
> +{
> +  unsigned int i = 0;
> +
> +  for (i = 0; i < ARRAY_SIZE (aarch64_simd_intrinsic_data); ++i)
> +    {
> +      aarch64_simd_intrinsic_datum* d = &aarch64_simd_intrinsic_data[i];

Making the type const would overflow this line, but using auto would be OK.

> +
> +      if (d->skip)
> +	continue;
> +
> +      tree return_type = void_type_node;
> +      tree args = void_list_node;
> +
> +      for (int op_num = d->op_count - 1; op_num >= 0; op_num--)
> +	{
> +	  machine_mode op_mode = d->op_modes[op_num];
> +	  enum aarch64_type_qualifiers qualifiers = d->qualifiers[op_num];
> +
> +	  /* For pointers, we want a pointer to the basic type
> +	     of the vector.
> +	  */
> +	  if (qualifiers & qualifier_pointer && VECTOR_MODE_P (op_mode))
> +	    op_mode = GET_MODE_INNER (op_mode);
> +
> +	  tree eltype = aarch64_simd_builtin_type
> +	    (op_mode,
> +	     (qualifiers & qualifier_unsigned) != 0,
> +	     (qualifiers & qualifier_poly) != 0);
> +	  gcc_assert (eltype != NULL);
> +
> +	  /* Add qualifiers.  */
> +	  if (qualifiers & qualifier_const)
> +	    eltype = build_qualified_type (eltype, TYPE_QUAL_CONST);
> +
> +	  if (qualifiers & qualifier_pointer)
> +	    eltype = build_pointer_type (eltype);

It would be better to put this eltype stuff in a common subroutine
rather than duplicate it from aarch64_init_simd_builtin_functions.

> +
> +	  if (op_num == 0)
> +	    return_type = eltype;
> +	  else
> +	    args = tree_cons (NULL_TREE, eltype, args);
> +	}
> +
> +      tree ftype = build_function_type (return_type, args);
> +      tree attrs = aarch64_get_attributes (FLAG_AUTO_FP, d->op_modes[0]);
> +      unsigned int code = (d->fcode << AARCH64_BUILTIN_SHIFT | AARCH64_BUILTIN_GENERAL);
> +      tree fndecl = simulate_builtin_function_decl (input_location, d->name, ftype, code, NULL, attrs);

Formatting nit: last two lines overflow the 80 character limit.

> +      aarch64_builtin_decls[d->fcode] = fndecl;
> +    }
> +}
> +
>  void
>  aarch64_init_simd_builtin_functions (bool called_from_pragma)
>  {
> […]
> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> index a00e1c6ef8d6b43d8b1a0fe4701e6b8c1f0f622f..a3f8f20e5fb20eeb9b9f48a27d83343638ba3c93 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -8039,6 +8039,20 @@
>    DONE;
> })
>
> +;; And same for V1DI (this should probably be merged with the above pattern)

Yes please :-)  E.g. maybe add a VQ_2E mode iterator as the 128-bit
half of VP_2E, and a V1HALF mode attribute that gives V1DI and V1DF
(instead of VHALF, which gives DI and DF).
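For concreteness, the suggested merge could look roughly like the sketch
below.  This is only an illustration of the idea: the names VQ_2E and
V1HALF are the hypothetical ones from the comment above and do not yet
exist in iterators.md, and the exact member lists would need checking
against the existing VP_2E definition.

```lisp
;; Hypothetical sketch: 128-bit vector modes with two 64-bit elements.
(define_mode_iterator VQ_2E [V2DI V2DF])

;; For each such mode, the single-element vector mode of one half,
;; i.e. V1DI/V1DF rather than the scalar DI/DF that VHALF gives.
(define_mode_attr V1HALF [(V2DI "V1DI") (V2DF "V1DF")])
(define_mode_attr v1half [(V2DI "v1di") (V2DF "v1df")])
```

A single vec_extract expander over VQ_2E could then use <V1HALF> for
its destination mode and cover both the V1DF and V1DI cases.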
Thanks,
Richard

> +(define_expand "vec_extractv2div1di"
> +  [(match_operand:V1DI 0 "register_operand")
> +   (match_operand:V2DI 1 "register_operand")
> +   (match_operand 2 "immediate_operand")]
> +  "TARGET_SIMD"
> +{
> +  /* V1DI is rarely used by other patterns, so it should be better to hide
> +     it in a subreg destination of a normal DI op.  */
> +  rtx scalar0 = gen_lowpart (DImode, operands[0]);
> +  emit_insn (gen_vec_extractv2didi (scalar0, operands[1], operands[2]));
> +  DONE;
> +})
> +
>  ;; aes
>
>  (define_insn "aarch64_crypto_aesv16qi"