From: Alan Lawrence
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 9/15][AArch64] vld{2,3,4}{,_lane,_dup}, vcombine, vcreate
Date: Tue, 28 Jul 2015 11:27:00 -0000
Message-ID: <55B766C3.4060601@arm.com>
In-Reply-To: <55B765DF.4040706@arm.com>

gcc/ChangeLog:

	* config/aarch64/aarch64.c (aarch64_split_simd_combine): Add
	V4HFmode.
	* config/aarch64/aarch64-builtins.c (VAR13, VAR14): New.
	(aarch64_scalar_builtin_types, aarch64_init_simd_builtin_scalar_types):
	Add __builtin_aarch64_simd_hf.
	* config/aarch64/arm_neon.h (float16x4x2_t, float16x8x2_t,
	float16x4x3_t, float16x8x3_t, float16x4x4_t, float16x8x4_t,
	vcombine_f16, vst2_lane_f16, vst2q_lane_f16, vst3_lane_f16,
	vst3q_lane_f16, vst4_lane_f16, vst4q_lane_f16, vld2_f16, vld2q_f16,
	vld3_f16, vld3q_f16, vld4_f16, vld4q_f16, vld2_dup_f16,
	vld2q_dup_f16, vld3_dup_f16, vld3q_dup_f16, vld4_dup_f16,
	vld4q_dup_f16, vld2_lane_f16, vld2q_lane_f16, vld3_lane_f16,
	vld3q_lane_f16, vld4_lane_f16, vld4q_lane_f16, vst2_f16, vst2q_f16,
	vst3_f16, vst3q_f16, vst4_f16, vst4q_f16, vcreate_f16): New.
	* config/aarch64/iterators.md (VALLDIF, Vtype, Vetype, Vbtype,
	V_cmp_result, v_cmp_result): Add cases for V4HF and V8HF.
	(VDC, Vdbl): Add V4HF.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/vldN_1.c: Add float16x4_t and float16x8_t cases.
	* gcc.target/aarch64/vldN_dup_1.c: Likewise.
	* gcc.target/aarch64/vldN_lane_1.c: Likewise.

Attachment: 09_aarch64_vcreate_et_al.patch

commit ef719e5d3d6eccc5cf621851283b7c0ba1a9ee6c
Author: Alan Lawrence
Date:   Tue Aug 5 17:52:28 2014 +0100

    AArch64 3/N: v(create|combine|v(ld|st|ld...dup/lane|st...lane)[234](q?))_f16;
    tests vldN{,_lane,_dup} inc bigendian. Add __builtin_aarch64_simd_hf.

    Fix some casts, to ..._hf not ..._sf !
diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index a6c3377..5367ba6 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -300,6 +300,12 @@ aarch64_types_storestruct_lane_qualifiers[SIMD_MAX_BUILTIN_ARGS]
 #define VAR12(T, N, MAP, A, B, C, D, E, F, G, H, I, J, K, L) \
   VAR11 (T, N, MAP, A, B, C, D, E, F, G, H, I, J, K) \
   VAR1 (T, N, MAP, L)
+#define VAR13(T, N, MAP, A, B, C, D, E, F, G, H, I, J, K, L, M) \
+  VAR12 (T, N, MAP, A, B, C, D, E, F, G, H, I, J, K, L) \
+  VAR1 (T, N, MAP, M)
+#define VAR14(T, X, MAP, A, B, C, D, E, F, G, H, I, J, K, L, M, N) \
+  VAR13 (T, X, MAP, A, B, C, D, E, F, G, H, I, J, K, L, M) \
+  VAR1 (T, X, MAP, N)

 #include "aarch64-builtin-iterators.h"

@@ -377,6 +383,7 @@ const char *aarch64_scalar_builtin_types[] = {
   "__builtin_aarch64_simd_qi",
   "__builtin_aarch64_simd_hi",
   "__builtin_aarch64_simd_si",
+  "__builtin_aarch64_simd_hf",
   "__builtin_aarch64_simd_sf",
   "__builtin_aarch64_simd_di",
   "__builtin_aarch64_simd_df",
@@ -664,6 +671,8 @@ aarch64_init_simd_builtin_scalar_types (void)
 					     "__builtin_aarch64_simd_qi");
   (*lang_hooks.types.register_builtin_type) (intHI_type_node,
 					     "__builtin_aarch64_simd_hi");
+  (*lang_hooks.types.register_builtin_type) (aarch64_fp16_type_node,
+					     "__builtin_aarch64_simd_hf");
   (*lang_hooks.types.register_builtin_type) (intSI_type_node,
 					     "__builtin_aarch64_simd_si");
   (*lang_hooks.types.register_builtin_type) (float_type_node,
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index ccf063a..bbf5230 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -1063,6 +1063,9 @@ aarch64_split_simd_combine (rtx dst, rtx src1, rtx src2)
     case V2SImode:
       gen = gen_aarch64_simd_combinev2si;
       break;
+    case V4HFmode:
+      gen = gen_aarch64_simd_combinev4hf;
+      break;
     case V2SFmode:
       gen = gen_aarch64_simd_combinev2sf;
       break;
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index 7425485..d61e619 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -153,6 +153,16 @@ typedef struct uint64x2x2_t
   uint64x2_t val[2];
 } uint64x2x2_t;

+typedef struct float16x4x2_t
+{
+  float16x4_t val[2];
+} float16x4x2_t;
+
+typedef struct float16x8x2_t
+{
+  float16x8_t val[2];
+} float16x8x2_t;
+
 typedef struct float32x2x2_t
 {
   float32x2_t val[2];
@@ -273,6 +283,16 @@ typedef struct uint64x2x3_t
   uint64x2_t val[3];
 } uint64x2x3_t;

+typedef struct float16x4x3_t
+{
+  float16x4_t val[3];
+} float16x4x3_t;
+
+typedef struct float16x8x3_t
+{
+  float16x8_t val[3];
+} float16x8x3_t;
+
 typedef struct float32x2x3_t
 {
   float32x2_t val[3];
@@ -393,6 +413,16 @@ typedef struct uint64x2x4_t
   uint64x2_t val[4];
 } uint64x2x4_t;

+typedef struct float16x4x4_t
+{
+  float16x4_t val[4];
+} float16x4x4_t;
+
+typedef struct float16x8x4_t
+{
+  float16x8_t val[4];
+} float16x8x4_t;
+
 typedef struct float32x2x4_t
 {
   float32x2_t val[4];
@@ -2644,6 +2674,12 @@ vcreate_s64 (uint64_t __a)
   return (int64x1_t) {__a};
 }

+__extension__ static __inline float16x4_t __attribute__ ((__always_inline__))
+vcreate_f16 (uint64_t __a)
+{
+  return (float16x4_t) __a;
+}
+
 __extension__ static __inline float32x2_t __attribute__ ((__always_inline__))
 vcreate_f32 (uint64_t __a)
 {
@@ -4780,6 +4816,12 @@ vcombine_s64 (int64x1_t __a, int64x1_t __b)
   return __builtin_aarch64_combinedi (__a[0], __b[0]);
 }

+__extension__ static __inline float16x8_t __attribute__ ((__always_inline__))
+vcombine_f16 (float16x4_t __a, float16x4_t __b)
+{
+  return __builtin_aarch64_combinev4hf (__a, __b);
+}
+
 __extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
 vcombine_f32 (float32x2_t __a, float32x2_t __b)
 {
@@ -9908,7 +9950,7 @@ vtstq_p16 (poly16x8_t a, poly16x8_t b)
    +------+----+----+----+----+
    |uint  | Y  | Y  | N  | N  |
    +------+----+----+----+----+
-   |float | -  | -  | N  | N  |
+   |float | -  | Y  | N  | N  |
    +------+----+----+----+----+
    |poly  | Y  | Y  | -  | -  |
    +------+----+----+----+----+
@@ -9922,7 +9964,7 @@ vtstq_p16 (poly16x8_t a, poly16x8_t b)
    +------+----+----+----+----+
    |uint  | Y  | Y  | Y  | Y  |
    +------+----+----+----+----+
-   |float | -  | -  | Y  | Y  |
+   |float | -  | Y  | Y  | Y  |
    +------+----+----+----+----+
    |poly  | Y  | Y  | -  | -  |
    +------+----+----+----+----+
@@ -9936,7 +9978,7 @@ vtstq_p16 (poly16x8_t a, poly16x8_t b)
    +------+----+----+----+----+
    |uint  | Y  | N  | N  | Y  |
    +------+----+----+----+----+
-   |float | -  | -  | N  | Y  |
+   |float | -  | N  | N  | Y  |
    +------+----+----+----+----+
    |poly  | Y  | N  | -  | -  |
    +------+----+----+----+----+
@@ -9952,6 +9994,7 @@ __STRUCTN (int, 8, 2)
 __STRUCTN (int, 16, 2)
 __STRUCTN (uint, 8, 2)
 __STRUCTN (uint, 16, 2)
+__STRUCTN (float, 16, 2)
 __STRUCTN (poly, 8, 2)
 __STRUCTN (poly, 16, 2)
 /* 3-element structs.  */
@@ -9963,6 +10006,7 @@ __STRUCTN (uint, 8, 3)
 __STRUCTN (uint, 16, 3)
 __STRUCTN (uint, 32, 3)
 __STRUCTN (uint, 64, 3)
+__STRUCTN (float, 16, 3)
 __STRUCTN (float, 32, 3)
 __STRUCTN (float, 64, 3)
 __STRUCTN (poly, 8, 3)
@@ -10000,6 +10044,8 @@ vst2_lane_ ## funcsuffix (ptrtype *__ptr, \
 				     __ptr, __o, __c); \
 }

+__ST2_LANE_FUNC (float16x4x2_t, float16x8x2_t, float16_t, v8hf, hf, f16,
+		 float16x8_t)
 __ST2_LANE_FUNC (float32x2x2_t, float32x4x2_t, float32_t, v4sf, sf, f32,
 		 float32x4_t)
 __ST2_LANE_FUNC (float64x1x2_t, float64x2x2_t, float64_t, v2df, df, f64,
@@ -10032,6 +10078,7 @@ vst2q_lane_ ## funcsuffix (ptrtype *__ptr, \
 				    __ptr, __temp.__o, __c); \
 }

+__ST2_LANE_FUNC (float16x8x2_t, float16_t, v8hf, hf, f16)
 __ST2_LANE_FUNC (float32x4x2_t, float32_t, v4sf, sf, f32)
 __ST2_LANE_FUNC (float64x2x2_t, float64_t, v2df, df, f64)
 __ST2_LANE_FUNC (poly8x16x2_t, poly8_t, v16qi, qi, p8)
@@ -10073,6 +10120,8 @@ vst3_lane_ ## funcsuffix (ptrtype *__ptr, \
 				     __ptr, __o, __c); \
 }

+__ST3_LANE_FUNC (float16x4x3_t, float16x8x3_t, float16_t, v8hf, hf, f16,
+		 float16x8_t)
 __ST3_LANE_FUNC (float32x2x3_t, float32x4x3_t, float32_t, v4sf, sf, f32,
 		 float32x4_t)
 __ST3_LANE_FUNC (float64x1x3_t, float64x2x3_t, float64_t, v2df, df, f64,
@@ -10105,6 +10154,7 @@ vst3q_lane_ ## funcsuffix (ptrtype *__ptr, \
 				    __ptr, __temp.__o, __c); \
 }

+__ST3_LANE_FUNC (float16x8x3_t, float16_t, v8hf, hf, f16)
 __ST3_LANE_FUNC (float32x4x3_t, float32_t, v4sf, sf, f32)
 __ST3_LANE_FUNC (float64x2x3_t, float64_t, v2df, df, f64)
 __ST3_LANE_FUNC (poly8x16x3_t, poly8_t, v16qi, qi, p8)
@@ -10151,6 +10201,8 @@ vst4_lane_ ## funcsuffix (ptrtype *__ptr, \
 				     __ptr, __o, __c); \
 }

+__ST4_LANE_FUNC (float16x4x4_t, float16x8x4_t, float16_t, v8hf, hf, f16,
+		 float16x8_t)
 __ST4_LANE_FUNC (float32x2x4_t, float32x4x4_t, float32_t, v4sf, sf, f32,
 		 float32x4_t)
 __ST4_LANE_FUNC (float64x1x4_t, float64x2x4_t, float64_t, v2df, df, f64,
@@ -10183,6 +10235,7 @@ vst4q_lane_ ## funcsuffix (ptrtype *__ptr, \
 				    __ptr, __temp.__o, __c); \
 }

+__ST4_LANE_FUNC (float16x8x4_t, float16_t, v8hf, hf, f16)
 __ST4_LANE_FUNC (float32x4x4_t, float32_t, v4sf, sf, f32)
 __ST4_LANE_FUNC (float64x2x4_t, float64_t, v2df, df, f64)
 __ST4_LANE_FUNC (poly8x16x4_t, poly8_t, v16qi, qi, p8)
@@ -15239,6 +15292,17 @@ vld2_u32 (const uint32_t * __a)
   return ret;
 }

+__extension__ static __inline float16x4x2_t __attribute__ ((__always_inline__))
+vld2_f16 (const float16_t * __a)
+{
+  float16x4x2_t ret;
+  __builtin_aarch64_simd_oi __o;
+  __o = __builtin_aarch64_ld2v4hf (__a);
+  ret.val[0] = __builtin_aarch64_get_dregoiv4hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_dregoiv4hf (__o, 1);
+  return ret;
+}
+
 __extension__ static __inline float32x2x2_t __attribute__ ((__always_inline__))
 vld2_f32 (const float32_t * __a)
 {
@@ -15360,6 +15424,17 @@ vld2q_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x2_t __attribute__ ((__always_inline__))
+vld2q_f16 (const float16_t * __a)
+{
+  float16x8x2_t ret;
+  __builtin_aarch64_simd_oi __o;
+  __o = __builtin_aarch64_ld2v8hf (__a);
+  ret.val[0] = __builtin_aarch64_get_qregoiv8hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_qregoiv8hf (__o, 1);
+  return ret;
+}
+
 __extension__ static __inline float32x4x2_t __attribute__ ((__always_inline__))
 vld2q_f32 (const float32_t * __a)
 {
@@ -15514,6 +15589,18 @@ vld3_u32 (const uint32_t * __a)
   return ret;
 }

+__extension__ static __inline float16x4x3_t __attribute__ ((__always_inline__))
+vld3_f16 (const float16_t * __a)
+{
+  float16x4x3_t ret;
+  __builtin_aarch64_simd_ci __o;
+  __o = __builtin_aarch64_ld3v4hf (__a);
+  ret.val[0] = __builtin_aarch64_get_dregciv4hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_dregciv4hf (__o, 1);
+  ret.val[2] = __builtin_aarch64_get_dregciv4hf (__o, 2);
+  return ret;
+}
+
 __extension__ static __inline float32x2x3_t __attribute__ ((__always_inline__))
 vld3_f32 (const float32_t * __a)
 {
@@ -15646,6 +15733,18 @@ vld3q_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x3_t __attribute__ ((__always_inline__))
+vld3q_f16 (const float16_t * __a)
+{
+  float16x8x3_t ret;
+  __builtin_aarch64_simd_ci __o;
+  __o = __builtin_aarch64_ld3v8hf (__a);
+  ret.val[0] = __builtin_aarch64_get_qregciv8hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_qregciv8hf (__o, 1);
+  ret.val[2] = __builtin_aarch64_get_qregciv8hf (__o, 2);
+  return ret;
+}
+
 __extension__ static __inline float32x4x3_t __attribute__ ((__always_inline__))
 vld3q_f32 (const float32_t * __a)
 {
@@ -15813,6 +15912,19 @@ vld4_u32 (const uint32_t * __a)
   return ret;
 }

+__extension__ static __inline float16x4x4_t __attribute__ ((__always_inline__))
+vld4_f16 (const float16_t * __a)
+{
+  float16x4x4_t ret;
+  __builtin_aarch64_simd_xi __o;
+  __o = __builtin_aarch64_ld4v4hf (__a);
+  ret.val[0] = __builtin_aarch64_get_dregxiv4hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_dregxiv4hf (__o, 1);
+  ret.val[2] = __builtin_aarch64_get_dregxiv4hf (__o, 2);
+  ret.val[3] = __builtin_aarch64_get_dregxiv4hf (__o, 3);
+  return ret;
+}
+
 __extension__ static __inline float32x2x4_t __attribute__ ((__always_inline__))
 vld4_f32 (const float32_t * __a)
 {
@@ -15956,6 +16068,19 @@ vld4q_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x4_t __attribute__ ((__always_inline__))
+vld4q_f16 (const float16_t * __a)
+{
+  float16x8x4_t ret;
+  __builtin_aarch64_simd_xi __o;
+  __o = __builtin_aarch64_ld4v8hf (__a);
+  ret.val[0] = __builtin_aarch64_get_qregxiv8hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_qregxiv8hf (__o, 1);
+  ret.val[2] = __builtin_aarch64_get_qregxiv8hf (__o, 2);
+  ret.val[3] = __builtin_aarch64_get_qregxiv8hf (__o, 3);
+  return ret;
+}
+
 __extension__ static __inline float32x4x4_t __attribute__ ((__always_inline__))
 vld4q_f32 (const float32_t * __a)
 {
@@ -16017,6 +16142,18 @@ vld2_dup_s32 (const int32_t * __a)
   return ret;
 }

+
+__extension__ static __inline float16x4x2_t __attribute__ ((__always_inline__))
+vld2_dup_f16 (const float16_t * __a)
+{
+  float16x4x2_t ret;
+  __builtin_aarch64_simd_oi __o;
+  __o = __builtin_aarch64_ld2rv4hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = __builtin_aarch64_get_dregoiv4hf (__o, 0);
+  ret.val[1] = (float16x4_t) __builtin_aarch64_get_dregoiv4hf (__o, 1);
+  return ret;
+}
+
 __extension__ static __inline float32x2x2_t __attribute__ ((__always_inline__))
 vld2_dup_f32 (const float32_t * __a)
 {
@@ -16226,6 +16363,17 @@ vld2q_dup_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x2_t __attribute__ ((__always_inline__))
+vld2q_dup_f16 (const float16_t * __a)
+{
+  float16x8x2_t ret;
+  __builtin_aarch64_simd_oi __o;
+  __o = __builtin_aarch64_ld2rv8hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = (float16x8_t) __builtin_aarch64_get_qregoiv8hf (__o, 0);
+  ret.val[1] = __builtin_aarch64_get_qregoiv8hf (__o, 1);
+  return ret;
+}
+
 __extension__ static __inline float32x4x2_t __attribute__ ((__always_inline__))
 vld2q_dup_f32 (const float32_t * __a)
 {
@@ -16380,6 +16528,18 @@ vld3_dup_u32 (const uint32_t * __a)
   return ret;
 }

+__extension__ static __inline float16x4x3_t __attribute__ ((__always_inline__))
+vld3_dup_f16 (const float16_t * __a)
+{
+  float16x4x3_t ret;
+  __builtin_aarch64_simd_ci __o;
+  __o = __builtin_aarch64_ld3rv4hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = (float16x4_t) __builtin_aarch64_get_dregciv4hf (__o, 0);
+  ret.val[1] = (float16x4_t) __builtin_aarch64_get_dregciv4hf (__o, 1);
+  ret.val[2] = (float16x4_t) __builtin_aarch64_get_dregciv4hf (__o, 2);
+  return ret;
+}
+
 __extension__ static __inline float32x2x3_t __attribute__ ((__always_inline__))
 vld3_dup_f32 (const float32_t * __a)
 {
@@ -16512,6 +16672,18 @@ vld3q_dup_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x3_t __attribute__ ((__always_inline__))
+vld3q_dup_f16 (const float16_t * __a)
+{
+  float16x8x3_t ret;
+  __builtin_aarch64_simd_ci __o;
+  __o = __builtin_aarch64_ld3rv8hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = (float16x8_t) __builtin_aarch64_get_qregciv8hf (__o, 0);
+  ret.val[1] = (float16x8_t) __builtin_aarch64_get_qregciv8hf (__o, 1);
+  ret.val[2] = (float16x8_t) __builtin_aarch64_get_qregciv8hf (__o, 2);
+  return ret;
+}
+
 __extension__ static __inline float32x4x3_t __attribute__ ((__always_inline__))
 vld3q_dup_f32 (const float32_t * __a)
 {
@@ -16679,6 +16851,19 @@ vld4_dup_u32 (const uint32_t * __a)
   return ret;
 }

+__extension__ static __inline float16x4x4_t __attribute__ ((__always_inline__))
+vld4_dup_f16 (const float16_t * __a)
+{
+  float16x4x4_t ret;
+  __builtin_aarch64_simd_xi __o;
+  __o = __builtin_aarch64_ld4rv4hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = (float16x4_t) __builtin_aarch64_get_dregxiv4hf (__o, 0);
+  ret.val[1] = (float16x4_t) __builtin_aarch64_get_dregxiv4hf (__o, 1);
+  ret.val[2] = (float16x4_t) __builtin_aarch64_get_dregxiv4hf (__o, 2);
+  ret.val[3] = (float16x4_t) __builtin_aarch64_get_dregxiv4hf (__o, 3);
+  return ret;
+}
+
 __extension__ static __inline float32x2x4_t __attribute__ ((__always_inline__))
 vld4_dup_f32 (const float32_t * __a)
 {
@@ -16822,6 +17007,19 @@ vld4q_dup_u64 (const uint64_t * __a)
   return ret;
 }

+__extension__ static __inline float16x8x4_t __attribute__ ((__always_inline__))
+vld4q_dup_f16 (const float16_t * __a)
+{
+  float16x8x4_t ret;
+  __builtin_aarch64_simd_xi __o;
+  __o = __builtin_aarch64_ld4rv8hf ((const __builtin_aarch64_simd_hf *) __a);
+  ret.val[0] = (float16x8_t) __builtin_aarch64_get_qregxiv8hf (__o, 0);
+  ret.val[1] = (float16x8_t) __builtin_aarch64_get_qregxiv8hf (__o, 1);
+  ret.val[2] = (float16x8_t) __builtin_aarch64_get_qregxiv8hf (__o, 2);
+  ret.val[3] = (float16x8_t) __builtin_aarch64_get_qregxiv8hf (__o, 3);
+  return ret;
+}
+
 __extension__ static __inline float32x4x4_t __attribute__ ((__always_inline__))
 vld4q_dup_f32 (const float32_t * __a)
 {
@@ -16874,6 +17072,8 @@ vld2_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \
   return __b; \
 }

+__LD2_LANE_FUNC (float16x4x2_t, float16x4_t, float16x8x2_t, float16_t, v8hf,
+		 hf, f16, float16x8_t)
 __LD2_LANE_FUNC (float32x2x2_t, float32x2_t, float32x4x2_t, float32_t, v4sf,
 		 sf, f32, float32x4_t)
 __LD2_LANE_FUNC (float64x1x2_t, float64x1_t, float64x2x2_t, float64_t, v2df,
@@ -16918,6 +17118,7 @@ vld2q_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \
   return ret; \
 }

+__LD2_LANE_FUNC (float16x8x2_t, float16x8_t, float16_t, v8hf, hf, f16)
 __LD2_LANE_FUNC (float32x4x2_t, float32x4_t, float32_t, v4sf, sf, f32)
 __LD2_LANE_FUNC (float64x2x2_t, float64x2_t, float64_t, v2df, df, f64)
 __LD2_LANE_FUNC (poly8x16x2_t, poly8x16_t, poly8_t, v16qi, qi, p8)
@@ -16965,6 +17166,8 @@ vld3_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \
   return __b; \
 }

+__LD3_LANE_FUNC (float16x4x3_t, float16x4_t, float16x8x3_t, float16_t, v8hf,
+		 hf, f16, float16x8_t)
 __LD3_LANE_FUNC (float32x2x3_t, float32x2_t, float32x4x3_t, float32_t, v4sf,
 		 sf, f32, float32x4_t)
 __LD3_LANE_FUNC (float64x1x3_t, float64x1_t, float64x2x3_t, float64_t, v2df,
@@ -17011,6 +17214,7 @@ vld3q_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \
   return ret; \
 }

+__LD3_LANE_FUNC (float16x8x3_t, float16x8_t, float16_t, v8hf, hf, f16)
 __LD3_LANE_FUNC (float32x4x3_t, float32x4_t, float32_t, v4sf, sf, f32)
 __LD3_LANE_FUNC (float64x2x3_t, float64x2_t, float64_t, v2df, df, f64)
 __LD3_LANE_FUNC (poly8x16x3_t, poly8x16_t, poly8_t, v16qi, qi, p8)
@@ -17066,6 +17270,8 @@ vld4_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \

 /* vld4q_lane */

+__LD4_LANE_FUNC (float16x4x4_t, float16x4_t, float16x8x4_t, float16_t, v8hf,
+		 hf, f16, float16x8_t)
 __LD4_LANE_FUNC (float32x2x4_t, float32x2_t, float32x4x4_t, float32_t, v4sf,
 		 sf, f32, float32x4_t)
 __LD4_LANE_FUNC (float64x1x4_t, float64x1_t, float64x2x4_t, float64_t, v2df,
@@ -17114,6 +17320,7 @@ vld4q_lane_##funcsuffix (const ptrtype * __ptr, intype __b, const int __c) \
   return ret; \
 }

+__LD4_LANE_FUNC (float16x8x4_t, float16x8_t, float16_t, v8hf, hf, f16)
 __LD4_LANE_FUNC (float32x4x4_t, float32x4_t, float32_t, v4sf, sf, f32)
 __LD4_LANE_FUNC (float64x2x4_t, float64x2_t, float64_t, v2df, df, f64)
 __LD4_LANE_FUNC (poly8x16x4_t, poly8x16_t, poly8_t, v16qi, qi, p8)
@@ -22474,6 +22681,18 @@ vst2_u32 (uint32_t * __a, uint32x2x2_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst2_f16 (float16_t * __a, float16x4x2_t val)
+{
+  __builtin_aarch64_simd_oi __o;
+  float16x8x2_t temp;
+  temp.val[0] = vcombine_f16 (val.val[0], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[1] = vcombine_f16 (val.val[1], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  __o = __builtin_aarch64_set_qregoiv8hf (__o, temp.val[0], 0);
+  __o = __builtin_aarch64_set_qregoiv8hf (__o, temp.val[1], 1);
+  __builtin_aarch64_st2v4hf (__a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst2_f32 (float32_t * __a, float32x2x2_t val)
 {
   __builtin_aarch64_simd_oi __o;
@@ -22576,6 +22795,15 @@ vst2q_u64 (uint64_t * __a, uint64x2x2_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst2q_f16 (float16_t * __a, float16x8x2_t val)
+{
+  __builtin_aarch64_simd_oi __o;
+  __o = __builtin_aarch64_set_qregoiv8hf (__o, val.val[0], 0);
+  __o = __builtin_aarch64_set_qregoiv8hf (__o, val.val[1], 1);
+  __builtin_aarch64_st2v8hf (__a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst2q_f32 (float32_t * __a, float32x4x2_t val)
 {
   __builtin_aarch64_simd_oi __o;
@@ -22748,6 +22976,20 @@ vst3_u32 (uint32_t * __a, uint32x2x3_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst3_f16 (float16_t * __a, float16x4x3_t val)
+{
+  __builtin_aarch64_simd_ci __o;
+  float16x8x3_t temp;
+  temp.val[0] = vcombine_f16 (val.val[0], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[1] = vcombine_f16 (val.val[1], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[2] = vcombine_f16 (val.val[2], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) temp.val[0], 0);
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) temp.val[1], 1);
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) temp.val[2], 2);
+  __builtin_aarch64_st3v4hf ((__builtin_aarch64_simd_hf *) __a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst3_f32 (float32_t * __a, float32x2x3_t val)
 {
   __builtin_aarch64_simd_ci __o;
@@ -22862,6 +23104,16 @@ vst3q_u64 (uint64_t * __a, uint64x2x3_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst3q_f16 (float16_t * __a, float16x8x3_t val)
+{
+  __builtin_aarch64_simd_ci __o;
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) val.val[0], 0);
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) val.val[1], 1);
+  __o = __builtin_aarch64_set_qregciv8hf (__o, (float16x8_t) val.val[2], 2);
+  __builtin_aarch64_st3v8hf ((__builtin_aarch64_simd_hf *) __a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst3q_f32 (float32_t * __a, float32x4x3_t val)
 {
   __builtin_aarch64_simd_ci __o;
@@ -23058,6 +23310,22 @@ vst4_u32 (uint32_t * __a, uint32x2x4_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst4_f16 (float16_t * __a, float16x4x4_t val)
+{
+  __builtin_aarch64_simd_xi __o;
+  float16x8x4_t temp;
+  temp.val[0] = vcombine_f16 (val.val[0], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[1] = vcombine_f16 (val.val[1], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[2] = vcombine_f16 (val.val[2], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  temp.val[3] = vcombine_f16 (val.val[3], vcreate_f16 (__AARCH64_UINT64_C (0)));
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) temp.val[0], 0);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) temp.val[1], 1);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) temp.val[2], 2);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) temp.val[3], 3);
+  __builtin_aarch64_st4v4hf ((__builtin_aarch64_simd_hf *) __a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst4_f32 (float32_t * __a, float32x2x4_t val)
 {
   __builtin_aarch64_simd_xi __o;
@@ -23184,6 +23452,17 @@ vst4q_u64 (uint64_t * __a, uint64x2x4_t val)
 }

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst4q_f16 (float16_t * __a, float16x8x4_t val)
+{
+  __builtin_aarch64_simd_xi __o;
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) val.val[0], 0);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) val.val[1], 1);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) val.val[2], 2);
+  __o = __builtin_aarch64_set_qregxiv8hf (__o, (float16x8_t) val.val[3], 3);
+  __builtin_aarch64_st4v8hf ((__builtin_aarch64_simd_hf *) __a, __o);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst4q_f32 (float32_t * __a, float32x4x4_t val)
 {
   __builtin_aarch64_simd_xi __o;
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index a7aaa52..96920cf 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -113,7 +113,7 @@

 ;; All vector modes and DI and DF.
 (define_mode_iterator VALLDIF [V8QI V16QI V4HI V8HI V2SI V4SI
-			       V2DI V2SF V4SF V2DF DI DF])
+			       V2DI V4HF V8HF V2SF V4SF V2DF DI DF])

 ;; Vector modes for Integer reduction across lanes.
 (define_mode_iterator VDQV [V8QI V16QI V4HI V8HI V4SI V2DI])
@@ -134,7 +134,7 @@
 (define_mode_iterator VQW [V16QI V8HI V4SI])

 ;; Double vector modes for combines.
-(define_mode_iterator VDC [V8QI V4HI V2SI V2SF DI DF])
+(define_mode_iterator VDC [V8QI V4HI V4HF V2SI V2SF DI DF])

 ;; Vector modes except double int.
 (define_mode_iterator VDQIF [V8QI V16QI V4HI V8HI V2SI V4SI V2SF V4SF V2DF])
@@ -364,7 +364,8 @@
			 (V2SI "2s") (V4SI "4s")
			 (DI "1d") (DF "1d")
			 (V2DI "2d") (V2SF "2s")
-			 (V4SF "4s") (V2DF "2d")])
+			 (V4SF "4s") (V2DF "2d")
+			 (V4HF "4h") (V8HF "8h")])

 (define_mode_attr Vrevsuff [(V4HI "16") (V8HI "16") (V2SI "32")
                             (V4SI "32") (V2DI "64")])
@@ -390,7 +391,8 @@
 (define_mode_attr Vetype [(V8QI "b") (V16QI "b")
			  (V4HI "h") (V8HI "h")
			  (V2SI "s") (V4SI "s")
-			  (V2DI "d") (V2SF "s")
+			  (V2DI "d") (V4HF "h")
+			  (V8HF "h") (V2SF "s")
			  (V4SF "s") (V2DF "d")
			  (SF "s") (DF "d")
			  (QI "b") (HI "h")
@@ -400,7 +402,8 @@
 (define_mode_attr Vbtype [(V8QI "8b") (V16QI "16b")
			  (V4HI "8b") (V8HI "16b")
			  (V2SI "8b") (V4SI "16b")
-			  (V2DI "16b") (V2SF "8b")
+			  (V2DI "16b") (V4HF "8b")
+			  (V8HF "16b") (V2SF "8b")
			  (V4SF "16b") (V2DF "16b")
			  (DI "8b") (DF "8b")
			  (SI "8b")])
@@ -451,6 +454,7 @@

 ;; Double modes of vector modes (lower case).
 (define_mode_attr Vdbl [(V8QI "v16qi") (V4HI "v8hi")
+			(V4HF "v8hf")
			(V2SI "v4si") (V2SF "v4sf")
			(SI "v2si") (DI "v2di")
			(DF "v2df")])
@@ -525,6 +529,7 @@
			     (V4HI "V4HI") (V8HI "V8HI")
			     (V2SI "V2SI") (V4SI "V4SI")
			     (DI "DI") (V2DI "V2DI")
+			     (V4HF "V4HI") (V8HF "V8HI")
			     (V2SF "V2SI") (V4SF "V4SI")
			     (V2DF "V2DI") (DF "DI")
			     (SF "SI")])
@@ -534,6 +539,7 @@
			     (V4HI "v4hi") (V8HI "v8hi")
			     (V2SI "v2si") (V4SI "v4si")
			     (DI "di") (V2DI "v2di")
+			     (V4HF "v4hi") (V8HF "v8hi")
			     (V2SF "v2si") (V4SF "v4si")
			     (V2DF "v2di") (DF "di")
			     (SF "si")])
diff --git a/gcc/testsuite/gcc.target/aarch64/vldN_1.c b/gcc/testsuite/gcc.target/aarch64/vldN_1.c
index b64de16..caac94f 100644
--- a/gcc/testsuite/gcc.target/aarch64/vldN_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vldN_1.c
@@ -39,6 +39,7 @@ VARIANT (int32, 2, STRUCT, _s32)	\
 VARIANT (int64, 1, STRUCT, _s64)	\
 VARIANT (poly8, 8, STRUCT, _p8)		\
 VARIANT (poly16, 4, STRUCT, _p16)	\
+VARIANT (float16, 4, STRUCT, _f16)	\
 VARIANT (float32, 2, STRUCT, _f32)	\
 VARIANT (float64, 1, STRUCT, _f64)	\
 VARIANT (uint8, 16, STRUCT, q_u8)	\
@@ -51,6 +52,7 @@ VARIANT (int32, 4, STRUCT, q_s32)	\
 VARIANT (int64, 2, STRUCT, q_s64)	\
 VARIANT (poly8, 16, STRUCT, q_p8)	\
 VARIANT (poly16, 8, STRUCT, q_p16)	\
+VARIANT (float16, 8, STRUCT, q_f16)	\
 VARIANT (float32, 4, STRUCT, q_f32)	\
 VARIANT (float64, 2, STRUCT, q_f64)

diff --git a/gcc/testsuite/gcc.target/aarch64/vldN_dup_1.c b/gcc/testsuite/gcc.target/aarch64/vldN_dup_1.c
index 9af0565..68c3fc3 100644
--- a/gcc/testsuite/gcc.target/aarch64/vldN_dup_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vldN_dup_1.c
@@ -16,6 +16,7 @@ VARIANT (int32, , 2, _s32, STRUCT)	\
 VARIANT (int64, , 1, _s64, STRUCT)	\
 VARIANT (poly8, , 8, _p8, STRUCT)	\
 VARIANT (poly16, , 4, _p16, STRUCT)	\
+VARIANT (float16, , 4, _f16, STRUCT)	\
 VARIANT (float32, , 2, _f32, STRUCT)	\
 VARIANT (float64, , 1, _f64, STRUCT)	\
 VARIANT (uint8, q, 16, _u8, STRUCT)	\
@@ -28,6 +29,7 @@ VARIANT (int32, q, 4, _s32, STRUCT)	\
 VARIANT (int64, q, 2, _s64, STRUCT)	\
 VARIANT (poly8, q, 16, _p8, STRUCT)	\
 VARIANT (poly16, q, 8, _p16, STRUCT)	\
+VARIANT (float16, q, 8, _f16, STRUCT)	\
 VARIANT (float32, q, 4, _f32, STRUCT)	\
 VARIANT (float64, q, 2, _f64, STRUCT)

@@ -74,6 +76,7 @@ main (int argc, char **argv)
   int64_t *int64_data = (int64_t *)uint64_data;
   poly8_t poly8_data[4] = { 0, 7, 13, 18, };
   poly16_t poly16_data[4] = { 11111, 2222, 333, 44 };
+  float16_t float16_data[4] = { 1.0625, 3.125, 0.03125, 7.75 };
   float32_t float32_data[4] = { 3.14159, 2.718, 1.414, 100.0 };
   float64_t float64_data[4] = { 1.010010001, 12345.6789, -9876.54321, 1.618 };

diff --git a/gcc/testsuite/gcc.target/aarch64/vldN_lane_1.c b/gcc/testsuite/gcc.target/aarch64/vldN_lane_1.c
index 13ab454..6837a11 100644
--- a/gcc/testsuite/gcc.target/aarch64/vldN_lane_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vldN_lane_1.c
@@ -16,6 +16,7 @@ VARIANT (int32, , 2, _s32, 0, STRUCT)	\
 VARIANT (int64, , 1, _s64, 0, STRUCT)	\
 VARIANT (poly8, , 8, _p8, 7, STRUCT)	\
 VARIANT (poly16, , 4, _p16, 1, STRUCT)	\
+VARIANT (float16, , 4, _f16, 3, STRUCT)	\
 VARIANT (float32, , 2, _f32, 1, STRUCT)	\
 VARIANT (float64, , 1, _f64, 0, STRUCT)	\
 VARIANT (uint8, q, 16, _u8, 14, STRUCT)	\
@@ -28,6 +29,7 @@ VARIANT (int32, q, 4, _s32, 2, STRUCT)	\
 VARIANT (int64, q, 2, _s64, 1, STRUCT)	\
 VARIANT (poly8, q, 16, _p8, 12, STRUCT)	\
 VARIANT (poly16, q, 8, _p16, 5, STRUCT)	\
+VARIANT (float16, q, 8, _f16, 7, STRUCT)\
 VARIANT (float32, q, 4, _f32, 1, STRUCT)\
 VARIANT (float64, q, 2, _f64, 0, STRUCT)

@@ -71,7 +73,7 @@ main (int argc, char **argv)
 {
   /* Original data for all vector formats.  */
   uint64_t orig_data[8] = {0x1234567890abcdefULL, 0x13579bdf02468aceULL,
-			   0x012389ab4567cdefULL, 0xfeeddadacafe0431ULL,
+			   0x012389ab4567cdefULL, 0xdeeddadacafe0431ULL,
			   0x1032547698badcfeULL, 0xbadbadbadbad0badULL,
			   0x0102030405060708ULL, 0x0f0e0d0c0b0a0908ULL};

@@ -87,6 +89,7 @@ main (int argc, char **argv)
   int64_t *int64_data = (int64_t *)uint64_data;
   poly8_t poly8_data[4] = { 0, 7, 13, 18, };
   poly16_t poly16_data[4] = { 11111, 2222, 333, 44 };
+  float16_t float16_data[4] = { 0.8125, 7.5, 19, 0.046875 };
   float32_t float32_data[4] = { 3.14159, 2.718, 1.414, 100.0 };
   float64_t float64_data[4] = { 1.010010001, 12345.6789, -9876.54321, 1.618 };
