From: Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>
To: Christophe Lyon <Christophe.Lyon@arm.com>,
"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
Richard Earnshaw <Richard.Earnshaw@arm.com>,
Richard Sandiford <Richard.Sandiford@arm.com>
Cc: Christophe Lyon <Christophe.Lyon@arm.com>
Subject: RE: [PATCH 20/23] arm: [MVE intrinsics] rework vqrshrunbq vqrshruntq vqshrunbq vqshruntq
Date: Fri, 5 May 2023 11:05:00 +0000 [thread overview]
Message-ID: <PAXPR08MB692671CF10FD25894E1FF6A593729@PAXPR08MB6926.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <20230505083930.101210-20-christophe.lyon@arm.com>
> -----Original Message-----
> From: Christophe Lyon <christophe.lyon@arm.com>
> Sent: Friday, May 5, 2023 9:39 AM
> To: gcc-patches@gcc.gnu.org; Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>;
> Richard Earnshaw <Richard.Earnshaw@arm.com>; Richard Sandiford
> <Richard.Sandiford@arm.com>
> Cc: Christophe Lyon <Christophe.Lyon@arm.com>
> Subject: [PATCH 20/23] arm: [MVE intrinsics] rework vqrshrunbq vqrshruntq
> vqshrunbq vqshruntq
>
> Implement vqrshrunbq, vqrshruntq, vqshrunbq, vqshruntq using the new
> MVE builtins framework.
Ok.
Thanks,
Kyrill
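
For readers following along, the semantics of the intrinsics being reworked can be sketched with a small scalar reference model (hypothetical illustration only, not the GCC implementation): each signed wide element of `b` is shifted right by an immediate, saturated to the unsigned narrow range, and written into the bottom (even-numbered) lanes of `a` for the `*b` forms; the `vqrshrun*` variants add a rounding constant before the shift. Per-lane behaviour here follows my reading of the Arm MVE documentation.

```python
def usat(x, bits):
    # Saturate a signed value to the unsigned range [0, 2**bits - 1].
    return max(0, min(x, (1 << bits) - 1))

def vqshrunbq(a, b, imm, bits=8, rounding=False):
    # Scalar model of vqshrunbq / vqrshrunbq: shift each wide element
    # of b right by imm (optionally rounding first), saturate to an
    # unsigned `bits`-wide value, and store into the even ("bottom")
    # lanes of a; odd lanes keep their previous contents.
    res = list(a)
    for i, x in enumerate(b):
        if rounding:
            x += 1 << (imm - 1)   # rounding constant for vqrshrun*
        res[2 * i] = usat(x >> imm, bits)
    return res
```

The `vqshruntq`/`vqrshruntq` ("top") forms differ only in writing the odd lanes, i.e. `res[2 * i + 1]`.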
>
> 2022-09-08 Christophe Lyon <christophe.lyon@arm.com>
>
> gcc/
> * config/arm/arm-mve-builtins-base.cc
> (FUNCTION_ONLY_N_NO_U_F): New.
> (vqshrunbq, vqshruntq, vqrshrunbq, vqrshruntq): New.
> * config/arm/arm-mve-builtins-base.def (vqshrunbq, vqshruntq)
> (vqrshrunbq, vqrshruntq): New.
> * config/arm/arm-mve-builtins-base.h (vqshrunbq, vqshruntq)
> (vqrshrunbq, vqrshruntq): New.
> * config/arm/arm-mve-builtins.cc
> (function_instance::has_inactive_argument): Handle vqshrunbq,
> vqshruntq, vqrshrunbq, vqrshruntq.
> * config/arm/arm_mve.h (vqrshrunbq): Remove.
> (vqrshruntq): Remove.
> (vqrshrunbq_m): Remove.
> (vqrshruntq_m): Remove.
> (vqrshrunbq_n_s16): Remove.
> (vqrshrunbq_n_s32): Remove.
> (vqrshruntq_n_s16): Remove.
> (vqrshruntq_n_s32): Remove.
> (vqrshrunbq_m_n_s32): Remove.
> (vqrshrunbq_m_n_s16): Remove.
> (vqrshruntq_m_n_s32): Remove.
> (vqrshruntq_m_n_s16): Remove.
> (__arm_vqrshrunbq_n_s16): Remove.
> (__arm_vqrshrunbq_n_s32): Remove.
> (__arm_vqrshruntq_n_s16): Remove.
> (__arm_vqrshruntq_n_s32): Remove.
> (__arm_vqrshrunbq_m_n_s32): Remove.
> (__arm_vqrshrunbq_m_n_s16): Remove.
> (__arm_vqrshruntq_m_n_s32): Remove.
> (__arm_vqrshruntq_m_n_s16): Remove.
> (__arm_vqrshrunbq): Remove.
> (__arm_vqrshruntq): Remove.
> (__arm_vqrshrunbq_m): Remove.
> (__arm_vqrshruntq_m): Remove.
> (vqshrunbq): Remove.
> (vqshruntq): Remove.
> (vqshrunbq_m): Remove.
> (vqshruntq_m): Remove.
> (vqshrunbq_n_s16): Remove.
> (vqshruntq_n_s16): Remove.
> (vqshrunbq_n_s32): Remove.
> (vqshruntq_n_s32): Remove.
> (vqshrunbq_m_n_s32): Remove.
> (vqshrunbq_m_n_s16): Remove.
> (vqshruntq_m_n_s32): Remove.
> (vqshruntq_m_n_s16): Remove.
> (__arm_vqshrunbq_n_s16): Remove.
> (__arm_vqshruntq_n_s16): Remove.
> (__arm_vqshrunbq_n_s32): Remove.
> (__arm_vqshruntq_n_s32): Remove.
> (__arm_vqshrunbq_m_n_s32): Remove.
> (__arm_vqshrunbq_m_n_s16): Remove.
> (__arm_vqshruntq_m_n_s32): Remove.
> (__arm_vqshruntq_m_n_s16): Remove.
> (__arm_vqshrunbq): Remove.
> (__arm_vqshruntq): Remove.
> (__arm_vqshrunbq_m): Remove.
> (__arm_vqshruntq_m): Remove.
> ---
> gcc/config/arm/arm-mve-builtins-base.cc | 13 +
> gcc/config/arm/arm-mve-builtins-base.def | 4 +
> gcc/config/arm/arm-mve-builtins-base.h | 4 +
> gcc/config/arm/arm-mve-builtins.cc | 4 +
> gcc/config/arm/arm_mve.h | 320 -----------------------
> 5 files changed, 25 insertions(+), 320 deletions(-)
>
> diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
> index c95abe70239..e7d2e0abffc 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.cc
> +++ b/gcc/config/arm/arm-mve-builtins-base.cc
> @@ -184,6 +184,15 @@ namespace arm_mve {
> -1, -1, -1, \
> UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1))
>
> + /* Helper for builtins with only unspec codes, _m predicated
> + overrides, only _n version, no unsigned, no floating-point. */
> +#define FUNCTION_ONLY_N_NO_U_F(NAME, UNSPEC) FUNCTION \
> + (NAME, unspec_mve_function_exact_insn, \
> + (-1, -1, -1, \
> + UNSPEC##_N_S, -1, -1, \
> + -1, -1, -1, \
> + UNSPEC##_M_N_S, -1, -1))
> +
> FUNCTION_WITHOUT_N (vabdq, VABDQ)
> FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ)
> FUNCTION_WITH_RTX_M (vandq, AND, VANDQ)
> @@ -203,8 +212,12 @@ FUNCTION_WITH_M_N_NO_U_F (vqrdmulhq, VQRDMULHQ)
> FUNCTION_WITH_M_N_R (vqshlq, VQSHLQ)
> FUNCTION_ONLY_N_NO_F (vqrshrnbq, VQRSHRNBQ)
> FUNCTION_ONLY_N_NO_F (vqrshrntq, VQRSHRNTQ)
> +FUNCTION_ONLY_N_NO_U_F (vqrshrunbq, VQRSHRUNBQ)
> +FUNCTION_ONLY_N_NO_U_F (vqrshruntq, VQRSHRUNTQ)
> FUNCTION_ONLY_N_NO_F (vqshrnbq, VQSHRNBQ)
> FUNCTION_ONLY_N_NO_F (vqshrntq, VQSHRNTQ)
> +FUNCTION_ONLY_N_NO_U_F (vqshrunbq, VQSHRUNBQ)
> +FUNCTION_ONLY_N_NO_U_F (vqshruntq, VQSHRUNTQ)
> FUNCTION_WITH_M_N_NO_F (vqsubq, VQSUBQ)
> FUNCTION (vreinterpretq, vreinterpretq_impl,)
> FUNCTION_WITHOUT_N_NO_F (vrhaddq, VRHADDQ)
> diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
> index 3dd40086663..50cb2d055e9 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.def
> +++ b/gcc/config/arm/arm-mve-builtins-base.def
> @@ -36,10 +36,14 @@ DEF_MVE_FUNCTION (vqrdmulhq, binary_opt_n, all_signed, m_or_none)
> DEF_MVE_FUNCTION (vqrshlq, binary_round_lshift, all_integer, m_or_none)
> DEF_MVE_FUNCTION (vqrshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> DEF_MVE_FUNCTION (vqrshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqrshrunbq, binary_rshift_narrow_unsigned, signed_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqrshruntq, binary_rshift_narrow_unsigned, signed_16_32, m_or_none)
> DEF_MVE_FUNCTION (vqshlq, binary_lshift, all_integer, m_or_none)
> DEF_MVE_FUNCTION (vqshlq, binary_lshift_r, all_integer, m_or_none)
> DEF_MVE_FUNCTION (vqshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> DEF_MVE_FUNCTION (vqshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqshrunbq, binary_rshift_narrow_unsigned, signed_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqshruntq, binary_rshift_narrow_unsigned, signed_16_32, m_or_none)
> DEF_MVE_FUNCTION (vqsubq, binary_opt_n, all_integer, m_or_none)
> DEF_MVE_FUNCTION (vreinterpretq, unary_convert, reinterpret_integer, none)
> DEF_MVE_FUNCTION (vrhaddq, binary, all_integer, mx_or_none)
> diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
> index 9e11ac83681..fcac772bc5b 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.h
> +++ b/gcc/config/arm/arm-mve-builtins-base.h
> @@ -41,9 +41,13 @@ extern const function_base *const vqrdmulhq;
> extern const function_base *const vqrshlq;
> extern const function_base *const vqrshrnbq;
> extern const function_base *const vqrshrntq;
> +extern const function_base *const vqrshrunbq;
> +extern const function_base *const vqrshruntq;
> extern const function_base *const vqshlq;
> extern const function_base *const vqshrnbq;
> extern const function_base *const vqshrntq;
> +extern const function_base *const vqshrunbq;
> +extern const function_base *const vqshruntq;
> extern const function_base *const vqsubq;
> extern const function_base *const vreinterpretq;
> extern const function_base *const vrhaddq;
> diff --git a/gcc/config/arm/arm-mve-builtins.cc b/gcc/config/arm/arm-mve-builtins.cc
> index 667bbc58483..4fc6160a794 100644
> --- a/gcc/config/arm/arm-mve-builtins.cc
> +++ b/gcc/config/arm/arm-mve-builtins.cc
> @@ -674,8 +674,12 @@ function_instance::has_inactive_argument () const
> || (base == functions::vqrshlq && mode_suffix_id == MODE_n)
> || base == functions::vqrshrnbq
> || base == functions::vqrshrntq
> + || base == functions::vqrshrunbq
> + || base == functions::vqrshruntq
> || base == functions::vqshrnbq
> || base == functions::vqshrntq
> + || base == functions::vqshrunbq
> + || base == functions::vqshruntq
> || (base == functions::vrshlq && mode_suffix_id == MODE_n)
> || base == functions::vrshrnbq
> || base == functions::vrshrntq
> diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
> index ed7852e2460..b2701f1135d 100644
> --- a/gcc/config/arm/arm_mve.h
> +++ b/gcc/config/arm/arm_mve.h
> @@ -113,7 +113,6 @@
> #define vrmlaldavhxq(__a, __b) __arm_vrmlaldavhxq(__a, __b)
> #define vabavq(__a, __b, __c) __arm_vabavq(__a, __b, __c)
> #define vbicq_m_n(__a, __imm, __p) __arm_vbicq_m_n(__a, __imm, __p)
> -#define vqrshrunbq(__a, __b, __imm) __arm_vqrshrunbq(__a, __b, __imm)
> #define vrmlaldavhaq(__a, __b, __c) __arm_vrmlaldavhaq(__a, __b, __c)
> #define vshlcq(__a, __b, __imm) __arm_vshlcq(__a, __b, __imm)
> #define vpselq(__a, __b, __p) __arm_vpselq(__a, __b, __p)
> @@ -190,9 +189,6 @@
> #define vqmovnbq_m(__a, __b, __p) __arm_vqmovnbq_m(__a, __b, __p)
> #define vqmovntq_m(__a, __b, __p) __arm_vqmovntq_m(__a, __b, __p)
> #define vrev32q_m(__inactive, __a, __p) __arm_vrev32q_m(__inactive, __a, __p)
> -#define vqrshruntq(__a, __b, __imm) __arm_vqrshruntq(__a, __b, __imm)
> -#define vqshrunbq(__a, __b, __imm) __arm_vqshrunbq(__a, __b, __imm)
> -#define vqshruntq(__a, __b, __imm) __arm_vqshruntq(__a, __b, __imm)
> #define vqmovunbq_m(__a, __b, __p) __arm_vqmovunbq_m(__a, __b, __p)
> #define vqmovuntq_m(__a, __b, __p) __arm_vqmovuntq_m(__a, __b, __p)
> #define vsriq_m(__a, __b, __imm, __p) __arm_vsriq_m(__a, __b, __imm, __p)
> @@ -236,10 +232,6 @@
> #define vmulltq_poly_m(__inactive, __a, __b, __p) __arm_vmulltq_poly_m(__inactive, __a, __b, __p)
> #define vqdmullbq_m(__inactive, __a, __b, __p) __arm_vqdmullbq_m(__inactive, __a, __b, __p)
> #define vqdmulltq_m(__inactive, __a, __b, __p) __arm_vqdmulltq_m(__inactive, __a, __b, __p)
> -#define vqrshrunbq_m(__a, __b, __imm, __p) __arm_vqrshrunbq_m(__a, __b, __imm, __p)
> -#define vqrshruntq_m(__a, __b, __imm, __p) __arm_vqrshruntq_m(__a, __b, __imm, __p)
> -#define vqshrunbq_m(__a, __b, __imm, __p) __arm_vqshrunbq_m(__a, __b, __imm, __p)
> -#define vqshruntq_m(__a, __b, __imm, __p) __arm_vqshruntq_m(__a, __b, __imm, __p)
> #define vrmlaldavhaq_p(__a, __b, __c, __p) __arm_vrmlaldavhaq_p(__a, __b, __c, __p)
> #define vrmlaldavhaxq_p(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p(__a, __b, __c, __p)
> #define vrmlsldavhaq_p(__a, __b, __c, __p) __arm_vrmlsldavhaq_p(__a, __b, __c, __p)
> @@ -889,8 +881,6 @@
> #define vcvtq_m_f16_u16(__inactive, __a, __p) __arm_vcvtq_m_f16_u16(__inactive, __a, __p)
> #define vcvtq_m_f32_s32(__inactive, __a, __p) __arm_vcvtq_m_f32_s32(__inactive, __a, __p)
> #define vcvtq_m_f32_u32(__inactive, __a, __p) __arm_vcvtq_m_f32_u32(__inactive, __a, __p)
> -#define vqrshrunbq_n_s16(__a, __b, __imm) __arm_vqrshrunbq_n_s16(__a, __b, __imm)
> -#define vqrshrunbq_n_s32(__a, __b, __imm) __arm_vqrshrunbq_n_s32(__a, __b, __imm)
> #define vrmlaldavhaq_s32(__a, __b, __c) __arm_vrmlaldavhaq_s32(__a, __b, __c)
> #define vrmlaldavhaq_u32(__a, __b, __c) __arm_vrmlaldavhaq_u32(__a, __b, __c)
> #define vshlcq_s8(__a, __b, __imm) __arm_vshlcq_s8(__a, __b, __imm)
> @@ -1203,9 +1193,6 @@
> #define vcmpneq_m_f16(__a, __b, __p) __arm_vcmpneq_m_f16(__a, __b, __p)
> #define vcmpneq_m_n_f16(__a, __b, __p) __arm_vcmpneq_m_n_f16(__a, __b, __p)
> #define vmvnq_m_n_u16(__inactive, __imm, __p) __arm_vmvnq_m_n_u16(__inactive, __imm, __p)
> -#define vqrshruntq_n_s16(__a, __b, __imm) __arm_vqrshruntq_n_s16(__a, __b, __imm)
> -#define vqshrunbq_n_s16(__a, __b, __imm) __arm_vqshrunbq_n_s16(__a, __b, __imm)
> -#define vqshruntq_n_s16(__a, __b, __imm) __arm_vqshruntq_n_s16(__a, __b, __imm)
> #define vcvtmq_m_u16_f16(__inactive, __a, __p) __arm_vcvtmq_m_u16_f16(__inactive, __a, __p)
> #define vcvtnq_m_u16_f16(__inactive, __a, __p) __arm_vcvtnq_m_u16_f16(__inactive, __a, __p)
> #define vcvtpq_m_u16_f16(__inactive, __a, __p) __arm_vcvtpq_m_u16_f16(__inactive, __a, __p)
> @@ -1278,9 +1265,6 @@
> #define vcmpneq_m_f32(__a, __b, __p) __arm_vcmpneq_m_f32(__a, __b, __p)
> #define vcmpneq_m_n_f32(__a, __b, __p) __arm_vcmpneq_m_n_f32(__a, __b, __p)
> #define vmvnq_m_n_u32(__inactive, __imm, __p) __arm_vmvnq_m_n_u32(__inactive, __imm, __p)
> -#define vqrshruntq_n_s32(__a, __b, __imm) __arm_vqrshruntq_n_s32(__a, __b, __imm)
> -#define vqshrunbq_n_s32(__a, __b, __imm) __arm_vqshrunbq_n_s32(__a, __b, __imm)
> -#define vqshruntq_n_s32(__a, __b, __imm) __arm_vqshruntq_n_s32(__a, __b, __imm)
> #define vcvtmq_m_u32_f32(__inactive, __a, __p) __arm_vcvtmq_m_u32_f32(__inactive, __a, __p)
> #define vcvtnq_m_u32_f32(__inactive, __a, __p) __arm_vcvtnq_m_u32_f32(__inactive, __a, __p)
> #define vcvtpq_m_u32_f32(__inactive, __a, __p) __arm_vcvtpq_m_u32_f32(__inactive, __a, __p)
> @@ -1466,14 +1450,6 @@
> #define vqdmulltq_m_n_s16(__inactive, __a, __b, __p) __arm_vqdmulltq_m_n_s16(__inactive, __a, __b, __p)
> #define vqdmulltq_m_s32(__inactive, __a, __b, __p) __arm_vqdmulltq_m_s32(__inactive, __a, __b, __p)
> #define vqdmulltq_m_s16(__inactive, __a, __b, __p) __arm_vqdmulltq_m_s16(__inactive, __a, __b, __p)
> -#define vqrshrunbq_m_n_s32(__a, __b, __imm, __p) __arm_vqrshrunbq_m_n_s32(__a, __b, __imm, __p)
> -#define vqrshrunbq_m_n_s16(__a, __b, __imm, __p) __arm_vqrshrunbq_m_n_s16(__a, __b, __imm, __p)
> -#define vqrshruntq_m_n_s32(__a, __b, __imm, __p) __arm_vqrshruntq_m_n_s32(__a, __b, __imm, __p)
> -#define vqrshruntq_m_n_s16(__a, __b, __imm, __p) __arm_vqrshruntq_m_n_s16(__a, __b, __imm, __p)
> -#define vqshrunbq_m_n_s32(__a, __b, __imm, __p) __arm_vqshrunbq_m_n_s32(__a, __b, __imm, __p)
> -#define vqshrunbq_m_n_s16(__a, __b, __imm, __p) __arm_vqshrunbq_m_n_s16(__a, __b, __imm, __p)
> -#define vqshruntq_m_n_s32(__a, __b, __imm, __p) __arm_vqshruntq_m_n_s32(__a, __b, __imm, __p)
> -#define vqshruntq_m_n_s16(__a, __b, __imm, __p) __arm_vqshruntq_m_n_s16(__a, __b, __imm, __p)
> #define vrmlaldavhaq_p_s32(__a, __b, __c, __p) __arm_vrmlaldavhaq_p_s32(__a, __b, __c, __p)
> #define vrmlaldavhaq_p_u32(__a, __b, __c, __p) __arm_vrmlaldavhaq_p_u32(__a, __b, __c, __p)
> #define vrmlaldavhaxq_p_s32(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p_s32(__a, __b, __c, __p)
> @@ -4445,20 +4421,6 @@ __arm_vbicq_m_n_u32 (uint32x4_t __a, const int __imm, mve_pred16_t __p)
> return __builtin_mve_vbicq_m_n_uv4si (__a, __imm, __p);
> }
>
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __builtin_mve_vqrshrunbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __builtin_mve_vqrshrunbq_n_sv4si (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline int64_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vrmlaldavhaq_s32 (int64_t __a, int32x4_t __b, int32x4_t __c)
> @@ -6320,27 +6282,6 @@ __arm_vmvnq_m_n_u16 (uint16x8_t __inactive, const int __imm, mve_pred16_t __p)
> return __builtin_mve_vmvnq_m_n_uv8hi (__inactive, __imm, __p);
> }
>
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __builtin_mve_vqrshruntq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __builtin_mve_vqshrunbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __builtin_mve_vqshruntq_n_sv8hi (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline uint8x16_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vqmovunbq_m_s16 (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p)
> @@ -6537,27 +6478,6 @@ __arm_vmvnq_m_n_u32 (uint32x4_t __inactive, const int __imm, mve_pred16_t __p)
> return __builtin_mve_vmvnq_m_n_uv4si (__inactive, __imm, __p);
> }
>
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __builtin_mve_vqrshruntq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __builtin_mve_vqshrunbq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __builtin_mve_vqshruntq_n_sv4si (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline uint16x8_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vqmovunbq_m_s32 (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> @@ -7797,62 +7717,6 @@ __arm_vqdmulltq_m_s16 (int32x4_t __inactive, int16x8_t __a, int16x8_t __b, mve_p
> return __builtin_mve_vqdmulltq_m_sv8hi (__inactive, __a, __b, __p);
> }
>
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqrshrunbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_m_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqrshrunbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqrshruntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_m_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqrshruntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqshrunbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_m_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqshrunbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqshruntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_m_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __builtin_mve_vqshruntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> __extension__ extern __inline int64_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vrmlaldavhaq_p_s32 (int64_t __a, int32x4_t __b, int32x4_t __c, mve_pred16_t __p)
> @@ -16398,20 +16262,6 @@ __arm_vbicq_m_n (uint32x4_t __a, const int __imm, mve_pred16_t __p)
> return __arm_vbicq_m_n_u32 (__a, __imm, __p);
> }
>
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshrunbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshrunbq_n_s32 (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline int64_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vrmlaldavhaq (int64_t __a, int32x4_t __b, int32x4_t __c)
> @@ -18260,27 +18110,6 @@ __arm_vmvnq_m (uint16x8_t __inactive, const int __imm, mve_pred16_t __p)
> return __arm_vmvnq_m_n_u16 (__inactive, __imm, __p);
> }
>
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshruntq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqshrunbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq (uint8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqshruntq_n_s16 (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline uint8x16_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vqmovunbq_m (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p)
> @@ -18477,27 +18306,6 @@ __arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t __p)
> return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p);
> }
>
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshruntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshrunbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshruntq_n_s32 (__a, __b, __imm);
> -}
> -
> __extension__ extern __inline uint16x8_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> @@ -19737,62 +19545,6 @@ __arm_vqdmulltq_m (int32x4_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred1
> return __arm_vqdmulltq_m_s16 (__inactive, __a, __b, __p);
> }
>
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqrshrunbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrunbq_m (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqrshrunbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_m (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqrshruntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq_m (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqrshruntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqshrunbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq_m (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqshrunbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_m (uint16x8_t __a, int32x4_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqshruntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq_m (uint8x16_t __a, int16x8_t __b, const int __imm, mve_pred16_t __p)
> -{
> - return __arm_vqshruntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -
> __extension__ extern __inline int64_t
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> __arm_vrmlaldavhaq_p (int64_t __a, int32x4_t __b, int32x4_t __c, mve_pred16_t __p)
> @@ -25799,12 +25551,6 @@ extern void *__ARM_undef;
> int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \
> int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));})
>
> -#define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshrunbq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshrunbq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vshlcq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \
> int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlcq_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1, p2), \
> @@ -26364,18 +26110,6 @@ extern void *__ARM_undef;
> int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \
> int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2));})
>
> -#define __arm_vqshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> -#define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vqmovnbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
> _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> @@ -26404,12 +26138,6 @@ extern void *__ARM_undef;
> int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
>
> -#define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vnegq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
> _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> @@ -27544,12 +27272,6 @@ extern void *__ARM_undef;
> int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \
> int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));})
>
> -#define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshrunbq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshrunbq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vqrdmlsdhq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
> __typeof(p2) __p2 = (p2); \
> @@ -27861,24 +27583,12 @@ extern void *__ARM_undef;
> int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev32q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \
> int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vrev32q_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2));})
>
> -#define __arm_vqshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vrev16q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
> _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \
> int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2));})
>
> -#define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vqmovuntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
>   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> @@ -28718,30 +28428,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vshlltq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vshlltq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));})
>
> -#define __arm_vqshruntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshruntq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshruntq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> -
> -#define __arm_vqshrunbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshrunbq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshrunbq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> -
> -#define __arm_vqrshrunbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshrunbq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshrunbq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> -
> -#define __arm_vqrshruntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqrshruntq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqrshruntq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> -
> #define __arm_vmlaldavaq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
> __typeof(p2) __p2 = (p2); \
> @@ -28831,12 +28517,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vmvnq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce1(__p1, int) , p2), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vmvnq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce1(__p1, int) , p2));})
>
> -#define __arm_vqshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> - __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqshrunbq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqshrunbq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));})
> -
> #define __arm_vqshluq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> __typeof(p1) __p1 = (p1); \
>   _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> --
> 2.34.1
Thread overview: 46+ messages
2023-05-05 8:39 [PATCH 01/23] arm: [MVE intrinsics] add binary_round_lshift shape Christophe Lyon
2023-05-05 8:39 ` [PATCH 02/23] arm: [MVE intrinsics] factorize vqrshlq vrshlq Christophe Lyon
2023-05-05 9:58 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 03/23] arm: [MVE intrinsics] rework vrshlq vqrshlq Christophe Lyon
2023-05-05 9:59 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 04/23] arm: [MVE intrinsics] factorize vqshlq vshlq Christophe Lyon
2023-05-05 10:00 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 05/23] arm: [MVE intrinsics] rework vqrdmulhq Christophe Lyon
2023-05-05 10:01 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 06/23] arm: [MVE intrinsics] factorize vabdq Christophe Lyon
2023-05-05 10:48 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 07/23] arm: [MVE intrinsics] rework vabdq Christophe Lyon
2023-05-05 10:49 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 08/23] arm: [MVE intrinsics] add binary_lshift shape Christophe Lyon
2023-05-05 10:51 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 09/23] arm: [MVE intrinsics] add support for MODE_r Christophe Lyon
2023-05-05 10:55 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 10/23] arm: [MVE intrinsics] add binary_lshift_r shape Christophe Lyon
2023-05-05 10:56 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 11/23] arm: [MVE intrinsics] add unspec_mve_function_exact_insn_vshl Christophe Lyon
2023-05-05 10:56 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 12/23] arm: [MVE intrinsics] rework vqshlq vshlq Christophe Lyon
2023-05-05 10:58 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 13/23] arm: [MVE intrinsics] factorize vmaxq vminq Christophe Lyon
2023-05-05 10:58 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 14/23] arm: [MVE intrinsics] rework " Christophe Lyon
2023-05-05 10:59 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 15/23] arm: [MVE intrinsics] add binary_rshift_narrow shape Christophe Lyon
2023-05-05 11:00 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 16/23] arm: [MVE intrinsics] factorize vshrntq vshrnbq vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq Christophe Lyon
2023-05-05 11:00 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq " Christophe Lyon
2023-05-05 11:02 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 18/23] arm: [MVE intrinsics] add binary_rshift_narrow_unsigned shape Christophe Lyon
2023-05-05 11:03 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 19/23] arm: [MVE intrinsics] factorize vqrshrunb vqrshrunt vqshrunb vqshrunt Christophe Lyon
2023-05-05 11:04 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 20/23] arm: [MVE intrinsics] rework vqrshrunbq vqrshruntq vqshrunbq vqshruntq Christophe Lyon
2023-05-05 11:05 ` Kyrylo Tkachov [this message]
2023-05-05 8:39 ` [PATCH 21/23] arm: [MVE intrinsics] add binary_rshift shape Christophe Lyon
2023-05-05 11:05 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 22/23] arm: [MVE intrinsics] factorize vsrhrq vrshrq Christophe Lyon
2023-05-05 11:06 ` Kyrylo Tkachov
2023-05-05 8:39 ` [PATCH 23/23] arm: [MVE intrinsics] rework vshrq vrshrq Christophe Lyon
2023-05-05 11:07 ` Kyrylo Tkachov
2023-05-05 9:55 ` [PATCH 01/23] arm: [MVE intrinsics] add binary_round_lshift shape Kyrylo Tkachov