public inbox for gcc-patches@gcc.gnu.org
From: Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>
To: Christophe Lyon <Christophe.Lyon@arm.com>,
	"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	Richard Earnshaw <Richard.Earnshaw@arm.com>,
	Richard Sandiford <Richard.Sandiford@arm.com>
Cc: Christophe Lyon <Christophe.Lyon@arm.com>
Subject: RE: [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq
Date: Fri, 5 May 2023 11:02:19 +0000
Message-ID: <PAXPR08MB692674F3B96BE61F759E839C93729@PAXPR08MB6926.eurprd08.prod.outlook.com>
In-Reply-To: <20230505083930.101210-17-christophe.lyon@arm.com>



> -----Original Message-----
> From: Christophe Lyon <christophe.lyon@arm.com>
> Sent: Friday, May 5, 2023 9:39 AM
> To: gcc-patches@gcc.gnu.org; Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>;
> Richard Earnshaw <Richard.Earnshaw@arm.com>; Richard Sandiford
> <Richard.Sandiford@arm.com>
> Cc: Christophe Lyon <Christophe.Lyon@arm.com>
> Subject: [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq
> vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq
> 
> Implement vshrnbq, vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq,
> vqrshrnbq, vqrshrntq using the new MVE builtins framework.

Ok with a style nit...
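
For readers not familiar with these intrinsics: they are the MVE "shift
right and narrow" family (plain, rounding, saturating, and rounding
saturating), writing into the bottom or top half-width lanes of the
destination.  A minimal usage sketch, matching the vshrnbq_n_s16 signature
in the arm_mve.h hunks below (the shift amount and function name are just
illustrative):

  #include <arm_mve.h>

  int8x16_t
  narrow_bottom (int8x16_t a, int16x8_t b)
  {
    /* Shift each 16-bit lane of b right by 3, truncate to 8 bits and
       write the results to the even (bottom) byte lanes of the result;
       the odd lanes are taken unchanged from a.  */
    return vshrnbq (a, b, 3);
  }

The predicated _m forms take the same arguments plus a predicate, with a
itself supplying the values of the inactive lanes, which is presumably why
these intrinsics are added to the early-return list in
has_inactive_argument below.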

> 
> 2022-09-08  Christophe Lyon  <christophe.lyon@arm.com>
> 
> 	gcc/
> 	* config/arm/arm-mve-builtins-base.cc (FUNCTION_ONLY_N_NO_F):
> New.
> 	(vshrnbq, vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq)
> 	(vqrshrnbq, vqrshrntq): New.
> 	* config/arm/arm-mve-builtins-base.def (vshrnbq, vshrntq)
> 	(vrshrnbq, vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq, vqrshrntq):
> 	New.
> 	* config/arm/arm-mve-builtins-base.h (vshrnbq, vshrntq, vrshrnbq)
> 	(vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq, vqrshrntq): New.
> 	* config/arm/arm-mve-builtins.cc
> 	(function_instance::has_inactive_argument): Handle vshrnbq,
> 	vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq,
> 	vqrshrntq.
> 	* config/arm/arm_mve.h (vshrnbq): Remove.
> 	(vshrntq): Remove.
> 	(vshrnbq_m): Remove.
> 	(vshrntq_m): Remove.
> 	(vshrnbq_n_s16): Remove.
> 	(vshrntq_n_s16): Remove.
> 	(vshrnbq_n_u16): Remove.
> 	(vshrntq_n_u16): Remove.
> 	(vshrnbq_n_s32): Remove.
> 	(vshrntq_n_s32): Remove.
> 	(vshrnbq_n_u32): Remove.
> 	(vshrntq_n_u32): Remove.
> 	(vshrnbq_m_n_s32): Remove.
> 	(vshrnbq_m_n_s16): Remove.
> 	(vshrnbq_m_n_u32): Remove.
> 	(vshrnbq_m_n_u16): Remove.
> 	(vshrntq_m_n_s32): Remove.
> 	(vshrntq_m_n_s16): Remove.
> 	(vshrntq_m_n_u32): Remove.
> 	(vshrntq_m_n_u16): Remove.
> 	(__arm_vshrnbq_n_s16): Remove.
> 	(__arm_vshrntq_n_s16): Remove.
> 	(__arm_vshrnbq_n_u16): Remove.
> 	(__arm_vshrntq_n_u16): Remove.
> 	(__arm_vshrnbq_n_s32): Remove.
> 	(__arm_vshrntq_n_s32): Remove.
> 	(__arm_vshrnbq_n_u32): Remove.
> 	(__arm_vshrntq_n_u32): Remove.
> 	(__arm_vshrnbq_m_n_s32): Remove.
> 	(__arm_vshrnbq_m_n_s16): Remove.
> 	(__arm_vshrnbq_m_n_u32): Remove.
> 	(__arm_vshrnbq_m_n_u16): Remove.
> 	(__arm_vshrntq_m_n_s32): Remove.
> 	(__arm_vshrntq_m_n_s16): Remove.
> 	(__arm_vshrntq_m_n_u32): Remove.
> 	(__arm_vshrntq_m_n_u16): Remove.
> 	(__arm_vshrnbq): Remove.
> 	(__arm_vshrntq): Remove.
> 	(__arm_vshrnbq_m): Remove.
> 	(__arm_vshrntq_m): Remove.
> 	(vrshrnbq): Remove.
> 	(vrshrntq): Remove.
> 	(vrshrnbq_m): Remove.
> 	(vrshrntq_m): Remove.
> 	(vrshrnbq_n_s16): Remove.
> 	(vrshrntq_n_s16): Remove.
> 	(vrshrnbq_n_u16): Remove.
> 	(vrshrntq_n_u16): Remove.
> 	(vrshrnbq_n_s32): Remove.
> 	(vrshrntq_n_s32): Remove.
> 	(vrshrnbq_n_u32): Remove.
> 	(vrshrntq_n_u32): Remove.
> 	(vrshrnbq_m_n_s32): Remove.
> 	(vrshrnbq_m_n_s16): Remove.
> 	(vrshrnbq_m_n_u32): Remove.
> 	(vrshrnbq_m_n_u16): Remove.
> 	(vrshrntq_m_n_s32): Remove.
> 	(vrshrntq_m_n_s16): Remove.
> 	(vrshrntq_m_n_u32): Remove.
> 	(vrshrntq_m_n_u16): Remove.
> 	(__arm_vrshrnbq_n_s16): Remove.
> 	(__arm_vrshrntq_n_s16): Remove.
> 	(__arm_vrshrnbq_n_u16): Remove.
> 	(__arm_vrshrntq_n_u16): Remove.
> 	(__arm_vrshrnbq_n_s32): Remove.
> 	(__arm_vrshrntq_n_s32): Remove.
> 	(__arm_vrshrnbq_n_u32): Remove.
> 	(__arm_vrshrntq_n_u32): Remove.
> 	(__arm_vrshrnbq_m_n_s32): Remove.
> 	(__arm_vrshrnbq_m_n_s16): Remove.
> 	(__arm_vrshrnbq_m_n_u32): Remove.
> 	(__arm_vrshrnbq_m_n_u16): Remove.
> 	(__arm_vrshrntq_m_n_s32): Remove.
> 	(__arm_vrshrntq_m_n_s16): Remove.
> 	(__arm_vrshrntq_m_n_u32): Remove.
> 	(__arm_vrshrntq_m_n_u16): Remove.
> 	(__arm_vrshrnbq): Remove.
> 	(__arm_vrshrntq): Remove.
> 	(__arm_vrshrnbq_m): Remove.
> 	(__arm_vrshrntq_m): Remove.
> 	(vqshrnbq): Remove.
> 	(vqshrntq): Remove.
> 	(vqshrnbq_m): Remove.
> 	(vqshrntq_m): Remove.
> 	(vqshrnbq_n_s16): Remove.
> 	(vqshrntq_n_s16): Remove.
> 	(vqshrnbq_n_u16): Remove.
> 	(vqshrntq_n_u16): Remove.
> 	(vqshrnbq_n_s32): Remove.
> 	(vqshrntq_n_s32): Remove.
> 	(vqshrnbq_n_u32): Remove.
> 	(vqshrntq_n_u32): Remove.
> 	(vqshrnbq_m_n_s32): Remove.
> 	(vqshrnbq_m_n_s16): Remove.
> 	(vqshrnbq_m_n_u32): Remove.
> 	(vqshrnbq_m_n_u16): Remove.
> 	(vqshrntq_m_n_s32): Remove.
> 	(vqshrntq_m_n_s16): Remove.
> 	(vqshrntq_m_n_u32): Remove.
> 	(vqshrntq_m_n_u16): Remove.
> 	(__arm_vqshrnbq_n_s16): Remove.
> 	(__arm_vqshrntq_n_s16): Remove.
> 	(__arm_vqshrnbq_n_u16): Remove.
> 	(__arm_vqshrntq_n_u16): Remove.
> 	(__arm_vqshrnbq_n_s32): Remove.
> 	(__arm_vqshrntq_n_s32): Remove.
> 	(__arm_vqshrnbq_n_u32): Remove.
> 	(__arm_vqshrntq_n_u32): Remove.
> 	(__arm_vqshrnbq_m_n_s32): Remove.
> 	(__arm_vqshrnbq_m_n_s16): Remove.
> 	(__arm_vqshrnbq_m_n_u32): Remove.
> 	(__arm_vqshrnbq_m_n_u16): Remove.
> 	(__arm_vqshrntq_m_n_s32): Remove.
> 	(__arm_vqshrntq_m_n_s16): Remove.
> 	(__arm_vqshrntq_m_n_u32): Remove.
> 	(__arm_vqshrntq_m_n_u16): Remove.
> 	(__arm_vqshrnbq): Remove.
> 	(__arm_vqshrntq): Remove.
> 	(__arm_vqshrnbq_m): Remove.
> 	(__arm_vqshrntq_m): Remove.
> 	(vqrshrnbq): Remove.
> 	(vqrshrntq): Remove.
> 	(vqrshrnbq_m): Remove.
> 	(vqrshrntq_m): Remove.
> 	(vqrshrnbq_n_s16): Remove.
> 	(vqrshrnbq_n_u16): Remove.
> 	(vqrshrnbq_n_s32): Remove.
> 	(vqrshrnbq_n_u32): Remove.
> 	(vqrshrntq_n_s16): Remove.
> 	(vqrshrntq_n_u16): Remove.
> 	(vqrshrntq_n_s32): Remove.
> 	(vqrshrntq_n_u32): Remove.
> 	(vqrshrnbq_m_n_s32): Remove.
> 	(vqrshrnbq_m_n_s16): Remove.
> 	(vqrshrnbq_m_n_u32): Remove.
> 	(vqrshrnbq_m_n_u16): Remove.
> 	(vqrshrntq_m_n_s32): Remove.
> 	(vqrshrntq_m_n_s16): Remove.
> 	(vqrshrntq_m_n_u32): Remove.
> 	(vqrshrntq_m_n_u16): Remove.
> 	(__arm_vqrshrnbq_n_s16): Remove.
> 	(__arm_vqrshrnbq_n_u16): Remove.
> 	(__arm_vqrshrnbq_n_s32): Remove.
> 	(__arm_vqrshrnbq_n_u32): Remove.
> 	(__arm_vqrshrntq_n_s16): Remove.
> 	(__arm_vqrshrntq_n_u16): Remove.
> 	(__arm_vqrshrntq_n_s32): Remove.
> 	(__arm_vqrshrntq_n_u32): Remove.
> 	(__arm_vqrshrnbq_m_n_s32): Remove.
> 	(__arm_vqrshrnbq_m_n_s16): Remove.
> 	(__arm_vqrshrnbq_m_n_u32): Remove.
> 	(__arm_vqrshrnbq_m_n_u16): Remove.
> 	(__arm_vqrshrntq_m_n_s32): Remove.
> 	(__arm_vqrshrntq_m_n_s16): Remove.
> 	(__arm_vqrshrntq_m_n_u32): Remove.
> 	(__arm_vqrshrntq_m_n_u16): Remove.
> 	(__arm_vqrshrnbq): Remove.
> 	(__arm_vqrshrntq): Remove.
> 	(__arm_vqrshrnbq_m): Remove.
> 	(__arm_vqrshrntq_m): Remove.
> ---
>  gcc/config/arm/arm-mve-builtins-base.cc  |   17 +
>  gcc/config/arm/arm-mve-builtins-base.def |    8 +
>  gcc/config/arm/arm-mve-builtins-base.h   |    8 +
>  gcc/config/arm/arm-mve-builtins.cc       |   11 +-
>  gcc/config/arm/arm_mve.h                 | 1196 +---------------------
>  5 files changed, 65 insertions(+), 1175 deletions(-)
> 
> diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
> index 1839d5cb1a5..c95abe70239 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.cc
> +++ b/gcc/config/arm/arm-mve-builtins-base.cc
> @@ -175,6 +175,15 @@ namespace arm_mve {
>      UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F,				\
>      -1, -1, -1))
> 
> +  /* Helper for builtins with only unspec codes, _m predicated
> +     overrides, only _n version, no floating-point.  */
> +#define FUNCTION_ONLY_N_NO_F(NAME, UNSPEC) FUNCTION			\
> +  (NAME, unspec_mve_function_exact_insn,				\
> +   (-1, -1, -1,								\
> +    UNSPEC##_N_S, UNSPEC##_N_U, -1,					\
> +    -1, -1, -1,								\
> +    UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1))
> +
>  FUNCTION_WITHOUT_N (vabdq, VABDQ)
>  FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ)
>  FUNCTION_WITH_RTX_M (vandq, AND, VANDQ)
> @@ -192,12 +201,20 @@ FUNCTION_WITH_M_N_NO_U_F (vqdmulhq, VQDMULHQ)
>  FUNCTION_WITH_M_N_NO_F (vqrshlq, VQRSHLQ)
>  FUNCTION_WITH_M_N_NO_U_F (vqrdmulhq, VQRDMULHQ)
>  FUNCTION_WITH_M_N_R (vqshlq, VQSHLQ)
> +FUNCTION_ONLY_N_NO_F (vqrshrnbq, VQRSHRNBQ)
> +FUNCTION_ONLY_N_NO_F (vqrshrntq, VQRSHRNTQ)
> +FUNCTION_ONLY_N_NO_F (vqshrnbq, VQSHRNBQ)
> +FUNCTION_ONLY_N_NO_F (vqshrntq, VQSHRNTQ)
>  FUNCTION_WITH_M_N_NO_F (vqsubq, VQSUBQ)
>  FUNCTION (vreinterpretq, vreinterpretq_impl,)
>  FUNCTION_WITHOUT_N_NO_F (vrhaddq, VRHADDQ)
>  FUNCTION_WITHOUT_N_NO_F (vrmulhq, VRMULHQ)
>  FUNCTION_WITH_M_N_NO_F (vrshlq, VRSHLQ)
> +FUNCTION_ONLY_N_NO_F (vrshrnbq, VRSHRNBQ)
> +FUNCTION_ONLY_N_NO_F (vrshrntq, VRSHRNTQ)
>  FUNCTION_WITH_M_N_R (vshlq, VSHLQ)
> +FUNCTION_ONLY_N_NO_F (vshrnbq, VSHRNBQ)
> +FUNCTION_ONLY_N_NO_F (vshrntq, VSHRNTQ)
>  FUNCTION_WITH_RTX_M_N (vsubq, MINUS, VSUBQ)
>  FUNCTION (vuninitializedq, vuninitializedq_impl,)
> 
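For reference, spelling out the expansion of the new helper for one entry,
FUNCTION_ONLY_N_NO_F (vshrnbq, VSHRNBQ) becomes

  FUNCTION (vshrnbq, unspec_mve_function_exact_insn,
	    (-1, -1, -1,
	     VSHRNBQ_N_S, VSHRNBQ_N_U, -1,
	     -1, -1, -1,
	     VSHRNBQ_M_N_S, VSHRNBQ_M_N_U, -1))

i.e. only the _n signed/unsigned and the predicated _m_n signed/unsigned
unspecs are wired up, matching the "only _n version, no floating-point"
comment above.
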
> diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
> index 3b42bf46e81..3dd40086663 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.def
> +++ b/gcc/config/arm/arm-mve-builtins-base.def
> @@ -34,15 +34,23 @@ DEF_MVE_FUNCTION (vqaddq, binary_opt_n, all_integer, m_or_none)
>  DEF_MVE_FUNCTION (vqdmulhq, binary_opt_n, all_signed, m_or_none)
>  DEF_MVE_FUNCTION (vqrdmulhq, binary_opt_n, all_signed, m_or_none)
>  DEF_MVE_FUNCTION (vqrshlq, binary_round_lshift, all_integer, m_or_none)
> +DEF_MVE_FUNCTION (vqrshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqrshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
>  DEF_MVE_FUNCTION (vqshlq, binary_lshift, all_integer, m_or_none)
>  DEF_MVE_FUNCTION (vqshlq, binary_lshift_r, all_integer, m_or_none)
> +DEF_MVE_FUNCTION (vqshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vqshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
>  DEF_MVE_FUNCTION (vqsubq, binary_opt_n, all_integer, m_or_none)
>  DEF_MVE_FUNCTION (vreinterpretq, unary_convert, reinterpret_integer, none)
>  DEF_MVE_FUNCTION (vrhaddq, binary, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vrmulhq, binary, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vrshlq, binary_round_lshift, all_integer, mx_or_none)
> +DEF_MVE_FUNCTION (vrshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vrshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
>  DEF_MVE_FUNCTION (vshlq, binary_lshift, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vshlq, binary_lshift_r, all_integer, m_or_none) // "_r" forms do not support the "x" predicate
> +DEF_MVE_FUNCTION (vshrnbq, binary_rshift_narrow, integer_16_32, m_or_none)
> +DEF_MVE_FUNCTION (vshrntq, binary_rshift_narrow, integer_16_32, m_or_none)
>  DEF_MVE_FUNCTION (vsubq, binary_opt_n, all_integer, mx_or_none)
>  DEF_MVE_FUNCTION (vuninitializedq, inherent, all_integer_with_64, none)
>  #undef REQUIRES_FLOAT
> diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
> index 81d10f4a8f4..9e11ac83681 100644
> --- a/gcc/config/arm/arm-mve-builtins-base.h
> +++ b/gcc/config/arm/arm-mve-builtins-base.h
> @@ -39,13 +39,21 @@ extern const function_base *const vqaddq;
>  extern const function_base *const vqdmulhq;
>  extern const function_base *const vqrdmulhq;
>  extern const function_base *const vqrshlq;
> +extern const function_base *const vqrshrnbq;
> +extern const function_base *const vqrshrntq;
>  extern const function_base *const vqshlq;
> +extern const function_base *const vqshrnbq;
> +extern const function_base *const vqshrntq;
>  extern const function_base *const vqsubq;
>  extern const function_base *const vreinterpretq;
>  extern const function_base *const vrhaddq;
>  extern const function_base *const vrmulhq;
>  extern const function_base *const vrshlq;
> +extern const function_base *const vrshrnbq;
> +extern const function_base *const vrshrntq;
>  extern const function_base *const vshlq;
> +extern const function_base *const vshrnbq;
> +extern const function_base *const vshrntq;
>  extern const function_base *const vsubq;
>  extern const function_base *const vuninitializedq;
> 
> diff --git a/gcc/config/arm/arm-mve-builtins.cc b/gcc/config/arm/arm-mve-builtins.cc
> index c25b1be9903..667bbc58483 100644
> --- a/gcc/config/arm/arm-mve-builtins.cc
> +++ b/gcc/config/arm/arm-mve-builtins.cc
> @@ -672,7 +672,16 @@ function_instance::has_inactive_argument () const
>    if (mode_suffix_id == MODE_r
>        || (base == functions::vorrq && mode_suffix_id == MODE_n)
>        || (base == functions::vqrshlq && mode_suffix_id == MODE_n)
> -      || (base == functions::vrshlq && mode_suffix_id == MODE_n))
> +      || base == functions::vqrshrnbq
> +      || base == functions::vqrshrntq
> +      || base == functions::vqshrnbq
> +      || base == functions::vqshrntq
> +      || (base == functions::vrshlq && mode_suffix_id == MODE_n)
> +      || base == functions::vrshrnbq
> +      || base == functions::vrshrntq
> +      || base == functions::vshrnbq
> +      || base == functions::vshrntq
> +      )

... The ')' should be on the previous line.
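I.e. end the chain like:

      || base == functions::vshrnbq
      || base == functions::vshrntq)
    return false;
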
Thanks,
Kyrill

>      return false;
> 
>    return true;
> diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
> index 5fbea52c8ef..ed7852e2460 100644
> --- a/gcc/config/arm/arm_mve.h
> +++ b/gcc/config/arm/arm_mve.h
> @@ -113,7 +113,6 @@
>  #define vrmlaldavhxq(__a, __b) __arm_vrmlaldavhxq(__a, __b)
>  #define vabavq(__a, __b, __c) __arm_vabavq(__a, __b, __c)
>  #define vbicq_m_n(__a, __imm, __p) __arm_vbicq_m_n(__a, __imm, __p)
> -#define vqrshrnbq(__a, __b, __imm) __arm_vqrshrnbq(__a, __b, __imm)
>  #define vqrshrunbq(__a, __b, __imm) __arm_vqrshrunbq(__a, __b, __imm)
>  #define vrmlaldavhaq(__a, __b, __c) __arm_vrmlaldavhaq(__a, __b, __c)
>  #define vshlcq(__a, __b, __imm) __arm_vshlcq(__a, __b, __imm)
> @@ -176,13 +175,6 @@
>  #define vrmlaldavhxq_p(__a, __b, __p) __arm_vrmlaldavhxq_p(__a, __b,
> __p)
>  #define vrmlsldavhq_p(__a, __b, __p) __arm_vrmlsldavhq_p(__a, __b, __p)
>  #define vrmlsldavhxq_p(__a, __b, __p) __arm_vrmlsldavhxq_p(__a, __b,
> __p)
> -#define vqrshrntq(__a, __b, __imm) __arm_vqrshrntq(__a, __b, __imm)
> -#define vqshrnbq(__a, __b, __imm) __arm_vqshrnbq(__a, __b, __imm)
> -#define vqshrntq(__a, __b, __imm) __arm_vqshrntq(__a, __b, __imm)
> -#define vrshrnbq(__a, __b, __imm) __arm_vrshrnbq(__a, __b, __imm)
> -#define vrshrntq(__a, __b, __imm) __arm_vrshrntq(__a, __b, __imm)
> -#define vshrnbq(__a, __b, __imm) __arm_vshrnbq(__a, __b, __imm)
> -#define vshrntq(__a, __b, __imm) __arm_vshrntq(__a, __b, __imm)
>  #define vmlaldavaq(__a, __b, __c) __arm_vmlaldavaq(__a, __b, __c)
>  #define vmlaldavaxq(__a, __b, __c) __arm_vmlaldavaxq(__a, __b, __c)
>  #define vmlsldavaq(__a, __b, __c) __arm_vmlsldavaq(__a, __b, __c)
> @@ -244,24 +236,16 @@
>  #define vmulltq_poly_m(__inactive, __a, __b, __p)
> __arm_vmulltq_poly_m(__inactive, __a, __b, __p)
>  #define vqdmullbq_m(__inactive, __a, __b, __p)
> __arm_vqdmullbq_m(__inactive, __a, __b, __p)
>  #define vqdmulltq_m(__inactive, __a, __b, __p)
> __arm_vqdmulltq_m(__inactive, __a, __b, __p)
> -#define vqrshrnbq_m(__a, __b, __imm, __p) __arm_vqrshrnbq_m(__a, __b,
> __imm, __p)
> -#define vqrshrntq_m(__a, __b, __imm, __p) __arm_vqrshrntq_m(__a, __b,
> __imm, __p)
>  #define vqrshrunbq_m(__a, __b, __imm, __p) __arm_vqrshrunbq_m(__a,
> __b, __imm, __p)
>  #define vqrshruntq_m(__a, __b, __imm, __p) __arm_vqrshruntq_m(__a,
> __b, __imm, __p)
> -#define vqshrnbq_m(__a, __b, __imm, __p) __arm_vqshrnbq_m(__a, __b,
> __imm, __p)
> -#define vqshrntq_m(__a, __b, __imm, __p) __arm_vqshrntq_m(__a, __b,
> __imm, __p)
>  #define vqshrunbq_m(__a, __b, __imm, __p) __arm_vqshrunbq_m(__a,
> __b, __imm, __p)
>  #define vqshruntq_m(__a, __b, __imm, __p) __arm_vqshruntq_m(__a, __b,
> __imm, __p)
>  #define vrmlaldavhaq_p(__a, __b, __c, __p) __arm_vrmlaldavhaq_p(__a,
> __b, __c, __p)
>  #define vrmlaldavhaxq_p(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p(__a,
> __b, __c, __p)
>  #define vrmlsldavhaq_p(__a, __b, __c, __p) __arm_vrmlsldavhaq_p(__a,
> __b, __c, __p)
>  #define vrmlsldavhaxq_p(__a, __b, __c, __p) __arm_vrmlsldavhaxq_p(__a,
> __b, __c, __p)
> -#define vrshrnbq_m(__a, __b, __imm, __p) __arm_vrshrnbq_m(__a, __b,
> __imm, __p)
> -#define vrshrntq_m(__a, __b, __imm, __p) __arm_vrshrntq_m(__a, __b,
> __imm, __p)
>  #define vshllbq_m(__inactive, __a, __imm, __p)
> __arm_vshllbq_m(__inactive, __a, __imm, __p)
>  #define vshlltq_m(__inactive, __a, __imm, __p) __arm_vshlltq_m(__inactive,
> __a, __imm, __p)
> -#define vshrnbq_m(__a, __b, __imm, __p) __arm_vshrnbq_m(__a, __b,
> __imm, __p)
> -#define vshrntq_m(__a, __b, __imm, __p) __arm_vshrntq_m(__a, __b,
> __imm, __p)
>  #define vstrbq_scatter_offset(__base, __offset, __value)
> __arm_vstrbq_scatter_offset(__base, __offset, __value)
>  #define vstrbq(__addr, __value) __arm_vstrbq(__addr, __value)
>  #define vstrwq_scatter_base(__addr, __offset, __value)
> __arm_vstrwq_scatter_base(__addr, __offset, __value)
> @@ -905,10 +889,6 @@
>  #define vcvtq_m_f16_u16(__inactive, __a, __p)
> __arm_vcvtq_m_f16_u16(__inactive, __a, __p)
>  #define vcvtq_m_f32_s32(__inactive, __a, __p)
> __arm_vcvtq_m_f32_s32(__inactive, __a, __p)
>  #define vcvtq_m_f32_u32(__inactive, __a, __p)
> __arm_vcvtq_m_f32_u32(__inactive, __a, __p)
> -#define vqrshrnbq_n_s16(__a, __b,  __imm) __arm_vqrshrnbq_n_s16(__a,
> __b,  __imm)
> -#define vqrshrnbq_n_u16(__a, __b,  __imm) __arm_vqrshrnbq_n_u16(__a,
> __b,  __imm)
> -#define vqrshrnbq_n_s32(__a, __b,  __imm) __arm_vqrshrnbq_n_s32(__a,
> __b,  __imm)
> -#define vqrshrnbq_n_u32(__a, __b,  __imm) __arm_vqrshrnbq_n_u32(__a,
> __b,  __imm)
>  #define vqrshrunbq_n_s16(__a, __b,  __imm)
> __arm_vqrshrunbq_n_s16(__a, __b,  __imm)
>  #define vqrshrunbq_n_s32(__a, __b,  __imm)
> __arm_vqrshrunbq_n_s32(__a, __b,  __imm)
>  #define vrmlaldavhaq_s32(__a, __b, __c) __arm_vrmlaldavhaq_s32(__a,
> __b, __c)
> @@ -1167,13 +1147,6 @@
>  #define vrev16q_m_u8(__inactive, __a, __p)
> __arm_vrev16q_m_u8(__inactive, __a, __p)
>  #define vrmlaldavhq_p_u32(__a, __b, __p) __arm_vrmlaldavhq_p_u32(__a,
> __b, __p)
>  #define vmvnq_m_n_s16(__inactive,  __imm, __p)
> __arm_vmvnq_m_n_s16(__inactive,  __imm, __p)
> -#define vqrshrntq_n_s16(__a, __b,  __imm) __arm_vqrshrntq_n_s16(__a,
> __b,  __imm)
> -#define vqshrnbq_n_s16(__a, __b,  __imm) __arm_vqshrnbq_n_s16(__a,
> __b,  __imm)
> -#define vqshrntq_n_s16(__a, __b,  __imm) __arm_vqshrntq_n_s16(__a,
> __b,  __imm)
> -#define vrshrnbq_n_s16(__a, __b,  __imm) __arm_vrshrnbq_n_s16(__a,
> __b,  __imm)
> -#define vrshrntq_n_s16(__a, __b,  __imm) __arm_vrshrntq_n_s16(__a, __b,
> __imm)
> -#define vshrnbq_n_s16(__a, __b,  __imm) __arm_vshrnbq_n_s16(__a, __b,
> __imm)
> -#define vshrntq_n_s16(__a, __b,  __imm) __arm_vshrntq_n_s16(__a, __b,
> __imm)
>  #define vcmlaq_f16(__a, __b, __c) __arm_vcmlaq_f16(__a, __b, __c)
>  #define vcmlaq_rot180_f16(__a, __b, __c) __arm_vcmlaq_rot180_f16(__a,
> __b, __c)
>  #define vcmlaq_rot270_f16(__a, __b, __c) __arm_vcmlaq_rot270_f16(__a,
> __b, __c)
> @@ -1239,13 +1212,6 @@
>  #define vcvtq_m_u16_f16(__inactive, __a, __p)
> __arm_vcvtq_m_u16_f16(__inactive, __a, __p)
>  #define vqmovunbq_m_s16(__a, __b, __p) __arm_vqmovunbq_m_s16(__a,
> __b, __p)
>  #define vqmovuntq_m_s16(__a, __b, __p) __arm_vqmovuntq_m_s16(__a,
> __b, __p)
> -#define vqrshrntq_n_u16(__a, __b,  __imm) __arm_vqrshrntq_n_u16(__a,
> __b,  __imm)
> -#define vqshrnbq_n_u16(__a, __b,  __imm) __arm_vqshrnbq_n_u16(__a,
> __b,  __imm)
> -#define vqshrntq_n_u16(__a, __b,  __imm) __arm_vqshrntq_n_u16(__a,
> __b,  __imm)
> -#define vrshrnbq_n_u16(__a, __b,  __imm) __arm_vrshrnbq_n_u16(__a,
> __b,  __imm)
> -#define vrshrntq_n_u16(__a, __b,  __imm) __arm_vrshrntq_n_u16(__a, __b,
> __imm)
> -#define vshrnbq_n_u16(__a, __b,  __imm) __arm_vshrnbq_n_u16(__a, __b,
> __imm)
> -#define vshrntq_n_u16(__a, __b,  __imm) __arm_vshrntq_n_u16(__a, __b,
> __imm)
>  #define vmlaldavaq_u16(__a, __b, __c) __arm_vmlaldavaq_u16(__a, __b,
> __c)
>  #define vmlaldavq_p_u16(__a, __b, __p) __arm_vmlaldavq_p_u16(__a, __b,
> __p)
>  #define vmovlbq_m_u8(__inactive, __a, __p)
> __arm_vmovlbq_m_u8(__inactive, __a, __p)
> @@ -1256,13 +1222,6 @@
>  #define vqmovntq_m_u16(__a, __b, __p) __arm_vqmovntq_m_u16(__a,
> __b, __p)
>  #define vrev32q_m_u8(__inactive, __a, __p)
> __arm_vrev32q_m_u8(__inactive, __a, __p)
>  #define vmvnq_m_n_s32(__inactive,  __imm, __p)
> __arm_vmvnq_m_n_s32(__inactive,  __imm, __p)
> -#define vqrshrntq_n_s32(__a, __b,  __imm) __arm_vqrshrntq_n_s32(__a,
> __b,  __imm)
> -#define vqshrnbq_n_s32(__a, __b,  __imm) __arm_vqshrnbq_n_s32(__a,
> __b,  __imm)
> -#define vqshrntq_n_s32(__a, __b,  __imm) __arm_vqshrntq_n_s32(__a,
> __b,  __imm)
> -#define vrshrnbq_n_s32(__a, __b,  __imm) __arm_vrshrnbq_n_s32(__a,
> __b,  __imm)
> -#define vrshrntq_n_s32(__a, __b,  __imm) __arm_vrshrntq_n_s32(__a, __b,
> __imm)
> -#define vshrnbq_n_s32(__a, __b,  __imm) __arm_vshrnbq_n_s32(__a, __b,
> __imm)
> -#define vshrntq_n_s32(__a, __b,  __imm) __arm_vshrntq_n_s32(__a, __b,
> __imm)
>  #define vcmlaq_f32(__a, __b, __c) __arm_vcmlaq_f32(__a, __b, __c)
>  #define vcmlaq_rot180_f32(__a, __b, __c) __arm_vcmlaq_rot180_f32(__a,
> __b, __c)
>  #define vcmlaq_rot270_f32(__a, __b, __c) __arm_vcmlaq_rot270_f32(__a,
> __b, __c)
> @@ -1328,13 +1287,6 @@
>  #define vcvtq_m_u32_f32(__inactive, __a, __p)
> __arm_vcvtq_m_u32_f32(__inactive, __a, __p)
>  #define vqmovunbq_m_s32(__a, __b, __p) __arm_vqmovunbq_m_s32(__a,
> __b, __p)
>  #define vqmovuntq_m_s32(__a, __b, __p) __arm_vqmovuntq_m_s32(__a,
> __b, __p)
> -#define vqrshrntq_n_u32(__a, __b,  __imm) __arm_vqrshrntq_n_u32(__a,
> __b,  __imm)
> -#define vqshrnbq_n_u32(__a, __b,  __imm) __arm_vqshrnbq_n_u32(__a,
> __b,  __imm)
> -#define vqshrntq_n_u32(__a, __b,  __imm) __arm_vqshrntq_n_u32(__a,
> __b,  __imm)
> -#define vrshrnbq_n_u32(__a, __b,  __imm) __arm_vrshrnbq_n_u32(__a,
> __b,  __imm)
> -#define vrshrntq_n_u32(__a, __b,  __imm) __arm_vrshrntq_n_u32(__a, __b,
> __imm)
> -#define vshrnbq_n_u32(__a, __b,  __imm) __arm_vshrnbq_n_u32(__a, __b,
> __imm)
> -#define vshrntq_n_u32(__a, __b,  __imm) __arm_vshrntq_n_u32(__a, __b,
> __imm)
>  #define vmlaldavaq_u32(__a, __b, __c) __arm_vmlaldavaq_u32(__a, __b,
> __c)
>  #define vmlaldavq_p_u32(__a, __b, __p) __arm_vmlaldavq_p_u32(__a, __b,
> __p)
>  #define vmovlbq_m_u16(__inactive, __a, __p)
> __arm_vmovlbq_m_u16(__inactive, __a, __p)
> @@ -1514,26 +1466,10 @@
>  #define vqdmulltq_m_n_s16(__inactive, __a, __b, __p)
> __arm_vqdmulltq_m_n_s16(__inactive, __a, __b, __p)
>  #define vqdmulltq_m_s32(__inactive, __a, __b, __p)
> __arm_vqdmulltq_m_s32(__inactive, __a, __b, __p)
>  #define vqdmulltq_m_s16(__inactive, __a, __b, __p)
> __arm_vqdmulltq_m_s16(__inactive, __a, __b, __p)
> -#define vqrshrnbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqrshrnbq_m_n_s32(__a, __b,  __imm, __p)
> -#define vqrshrnbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqrshrnbq_m_n_s16(__a, __b,  __imm, __p)
> -#define vqrshrnbq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vqrshrnbq_m_n_u32(__a, __b,  __imm, __p)
> -#define vqrshrnbq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vqrshrnbq_m_n_u16(__a, __b,  __imm, __p)
> -#define vqrshrntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqrshrntq_m_n_s32(__a, __b,  __imm, __p)
> -#define vqrshrntq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqrshrntq_m_n_s16(__a, __b,  __imm, __p)
> -#define vqrshrntq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vqrshrntq_m_n_u32(__a, __b,  __imm, __p)
> -#define vqrshrntq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vqrshrntq_m_n_u16(__a, __b,  __imm, __p)
>  #define vqrshrunbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqrshrunbq_m_n_s32(__a, __b,  __imm, __p)
>  #define vqrshrunbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqrshrunbq_m_n_s16(__a, __b,  __imm, __p)
>  #define vqrshruntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqrshruntq_m_n_s32(__a, __b,  __imm, __p)
>  #define vqrshruntq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqrshruntq_m_n_s16(__a, __b,  __imm, __p)
> -#define vqshrnbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqshrnbq_m_n_s32(__a, __b,  __imm, __p)
> -#define vqshrnbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqshrnbq_m_n_s16(__a, __b,  __imm, __p)
> -#define vqshrnbq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vqshrnbq_m_n_u32(__a, __b,  __imm, __p)
> -#define vqshrnbq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vqshrnbq_m_n_u16(__a, __b,  __imm, __p)
> -#define vqshrntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqshrntq_m_n_s32(__a, __b,  __imm, __p)
> -#define vqshrntq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqshrntq_m_n_s16(__a, __b,  __imm, __p)
> -#define vqshrntq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vqshrntq_m_n_u32(__a, __b,  __imm, __p)
> -#define vqshrntq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vqshrntq_m_n_u16(__a, __b,  __imm, __p)
>  #define vqshrunbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqshrunbq_m_n_s32(__a, __b,  __imm, __p)
>  #define vqshrunbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vqshrunbq_m_n_s16(__a, __b,  __imm, __p)
>  #define vqshruntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vqshruntq_m_n_s32(__a, __b,  __imm, __p)
> @@ -1543,14 +1479,6 @@
>  #define vrmlaldavhaxq_p_s32(__a, __b, __c, __p)
> __arm_vrmlaldavhaxq_p_s32(__a, __b, __c, __p)
>  #define vrmlsldavhaq_p_s32(__a, __b, __c, __p)
> __arm_vrmlsldavhaq_p_s32(__a, __b, __c, __p)
>  #define vrmlsldavhaxq_p_s32(__a, __b, __c, __p)
> __arm_vrmlsldavhaxq_p_s32(__a, __b, __c, __p)
> -#define vrshrnbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vrshrnbq_m_n_s32(__a, __b,  __imm, __p)
> -#define vrshrnbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vrshrnbq_m_n_s16(__a, __b,  __imm, __p)
> -#define vrshrnbq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vrshrnbq_m_n_u32(__a, __b,  __imm, __p)
> -#define vrshrnbq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vrshrnbq_m_n_u16(__a, __b,  __imm, __p)
> -#define vrshrntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vrshrntq_m_n_s32(__a, __b,  __imm, __p)
> -#define vrshrntq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vrshrntq_m_n_s16(__a, __b,  __imm, __p)
> -#define vrshrntq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vrshrntq_m_n_u32(__a, __b,  __imm, __p)
> -#define vrshrntq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vrshrntq_m_n_u16(__a, __b,  __imm, __p)
>  #define vshllbq_m_n_s8(__inactive, __a,  __imm, __p)
> __arm_vshllbq_m_n_s8(__inactive, __a,  __imm, __p)
>  #define vshllbq_m_n_s16(__inactive, __a,  __imm, __p)
> __arm_vshllbq_m_n_s16(__inactive, __a,  __imm, __p)
>  #define vshllbq_m_n_u8(__inactive, __a,  __imm, __p)
> __arm_vshllbq_m_n_u8(__inactive, __a,  __imm, __p)
> @@ -1559,14 +1487,6 @@
>  #define vshlltq_m_n_s16(__inactive, __a,  __imm, __p)
> __arm_vshlltq_m_n_s16(__inactive, __a,  __imm, __p)
>  #define vshlltq_m_n_u8(__inactive, __a,  __imm, __p)
> __arm_vshlltq_m_n_u8(__inactive, __a,  __imm, __p)
>  #define vshlltq_m_n_u16(__inactive, __a,  __imm, __p)
> __arm_vshlltq_m_n_u16(__inactive, __a,  __imm, __p)
> -#define vshrnbq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vshrnbq_m_n_s32(__a, __b,  __imm, __p)
> -#define vshrnbq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vshrnbq_m_n_s16(__a, __b,  __imm, __p)
> -#define vshrnbq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vshrnbq_m_n_u32(__a, __b,  __imm, __p)
> -#define vshrnbq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vshrnbq_m_n_u16(__a, __b,  __imm, __p)
> -#define vshrntq_m_n_s32(__a, __b,  __imm, __p)
> __arm_vshrntq_m_n_s32(__a, __b,  __imm, __p)
> -#define vshrntq_m_n_s16(__a, __b,  __imm, __p)
> __arm_vshrntq_m_n_s16(__a, __b,  __imm, __p)
> -#define vshrntq_m_n_u32(__a, __b,  __imm, __p)
> __arm_vshrntq_m_n_u32(__a, __b,  __imm, __p)
> -#define vshrntq_m_n_u16(__a, __b,  __imm, __p)
> __arm_vshrntq_m_n_u16(__a, __b,  __imm, __p)
>  #define vbicq_m_f32(__inactive, __a, __b, __p)
> __arm_vbicq_m_f32(__inactive, __a, __b, __p)
>  #define vbicq_m_f16(__inactive, __a, __b, __p)
> __arm_vbicq_m_f16(__inactive, __a, __b, __p)
>  #define vbrsrq_m_n_f32(__inactive, __a, __b, __p)
> __arm_vbrsrq_m_n_f32(__inactive, __a, __b, __p)
> @@ -4525,34 +4445,6 @@ __arm_vbicq_m_n_u32 (uint32x4_t __a, const int
> __imm, mve_pred16_t __p)
>    return __builtin_mve_vbicq_m_n_uv4si (__a, __imm, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrnbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrnbq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrnbq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrnbq_n_uv4si (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline uint8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqrshrunbq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm)
> @@ -6316,55 +6208,6 @@ __arm_vmvnq_m_n_s16 (int16x8_t __inactive,
> const int __imm, mve_pred16_t __p)
>    return __builtin_mve_vmvnq_m_n_sv8hi (__inactive, __imm, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrntq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrnbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrntq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrnbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrntq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrnbq_n_sv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrntq_n_sv8hi (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline int64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq_s16 (int64_t __a, int16x8_t __b, int16x8_t __c)
> @@ -6512,55 +6355,6 @@ __arm_vqmovuntq_m_s16 (uint8x16_t __a,
> int16x8_t __b, mve_pred16_t __p)
>    return __builtin_mve_vqmovuntq_m_sv8hi (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrntq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrnbq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrntq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrnbq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrntq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrnbq_n_uv8hi (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrntq_n_uv8hi (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline uint64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq_u16 (uint64_t __a, uint16x8_t __b, uint16x8_t __c)
> @@ -6631,55 +6425,6 @@ __arm_vmvnq_m_n_s32 (int32x4_t __inactive,
> const int __imm, mve_pred16_t __p)
>    return __builtin_mve_vmvnq_m_n_sv4si (__inactive, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrntq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrnbq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrntq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrnbq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrntq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrnbq_n_sv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrntq_n_sv4si (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline int64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq_s32 (int64_t __a, int32x4_t __b, int32x4_t __c)
> @@ -6827,55 +6572,6 @@ __arm_vqmovuntq_m_s32 (uint16x8_t __a,
> int32x4_t __b, mve_pred16_t __p)
>    return __builtin_mve_vqmovuntq_m_sv4si (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqrshrntq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrnbq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vqshrntq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrnbq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vrshrntq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrnbq_n_uv4si (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> -  return __builtin_mve_vshrntq_n_uv4si (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline uint64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq_u32 (uint64_t __a, uint32x4_t __b, uint32x4_t __c)
> @@ -8101,62 +7797,6 @@ __arm_vqdmulltq_m_s16 (int32x4_t __inactive,
> int16x8_t __a, int16x8_t __b, mve_p
>    return __builtin_mve_vqdmulltq_m_sv8hi (__inactive, __a, __b, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrnbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrnbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrnbq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrnbq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrntq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqrshrntq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqrshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int
> __imm, mve_pred16_t __p)
> @@ -8185,62 +7825,6 @@ __arm_vqrshruntq_m_n_s16 (uint8x16_t __a,
> int16x8_t __b, const int __imm, mve_pr
>    return __builtin_mve_vqrshruntq_m_n_sv8hi (__a, __b, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrnbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrnbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrnbq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrnbq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrntq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vqshrntq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int
> __imm, mve_pred16_t __p)
> @@ -8304,62 +7888,6 @@ __arm_vrmlsldavhaxq_p_s32 (int64_t __a,
> int32x4_t __b, int32x4_t __c, mve_pred16
>    return __builtin_mve_vrmlsldavhaxq_p_sv4si (__a, __b, __c, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrnbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrnbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrnbq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int
> __imm, mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrnbq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrntq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vrshrntq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vshllbq_m_n_s8 (int16x8_t __inactive, int8x16_t __a, const int
> __imm, mve_pred16_t __p)
> @@ -8416,62 +7944,6 @@ __arm_vshlltq_m_n_u16 (uint32x4_t __inactive,
> uint16x8_t __a, const int __imm, m
>    return __builtin_mve_vshlltq_m_n_uv8hi (__inactive, __a, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrnbq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrnbq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrnbq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrnbq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrntq_m_n_sv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrntq_m_n_sv8hi (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrntq_m_n_uv4si (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> -  return __builtin_mve_vshrntq_m_n_uv8hi (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline void
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vstrbq_scatter_offset_s8 (int8_t * __base, uint8x16_t __offset,
> int8x16_t __value)
> @@ -16926,34 +16398,6 @@ __arm_vbicq_m_n (uint32x4_t __a, const int
> __imm, mve_pred16_t __p)
>   return __arm_vbicq_m_n_u32 (__a, __imm, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshrnbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshrnbq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshrnbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshrnbq_n_u32 (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline uint8x16_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqrshrunbq (uint8x16_t __a, int16x8_t __b, const int __imm)
> @@ -18704,55 +18148,6 @@ __arm_vmvnq_m (int16x8_t __inactive, const
> int __imm, mve_pred16_t __p)
>   return __arm_vmvnq_m_n_s16 (__inactive, __imm, __p);
>  }
> 
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshrntq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqshrnbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vqshrntq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vrshrnbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vrshrntq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vshrnbq_n_s16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq (int8x16_t __a, int16x8_t __b, const int __imm)
> -{
> - return __arm_vshrntq_n_s16 (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline int64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq (int64_t __a, int16x8_t __b, int16x8_t __c)
> @@ -18900,55 +18295,6 @@ __arm_vqmovuntq_m (uint8x16_t __a,
> int16x8_t __b, mve_pred16_t __p)
>   return __arm_vqmovuntq_m_s16 (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vqrshrntq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vqshrnbq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vqshrntq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vrshrnbq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vrshrntq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vshrnbq_n_u16 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm)
> -{
> - return __arm_vshrntq_n_u16 (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline uint64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq (uint64_t __a, uint16x8_t __b, uint16x8_t __c)
> @@ -19019,55 +18365,6 @@ __arm_vmvnq_m (int32x4_t __inactive, const
> int __imm, mve_pred16_t __p)
>   return __arm_vmvnq_m_n_s32 (__inactive, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshrntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshrnbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshrntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vrshrnbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vrshrntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vshrnbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq (int16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vshrntq_n_s32 (__a, __b, __imm);
> -}
> -
>  __extension__ extern __inline int64_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vmlaldavaq (int64_t __a, int32x4_t __b, int32x4_t __c)
> @@ -19152,116 +18449,67 @@ __arm_vmovntq_m (int16x8_t __a, int32x4_t
> __b, mve_pred16_t __p)
>   return __arm_vmovntq_m_s32 (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vqmovnbq_m_s32 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> -{
> - return __arm_vqmovntq_m_s32 (__a, __b, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrev32q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p)
> -{
> - return __arm_vrev32q_m_s16 (__inactive, __a, __p);
> -}
> -
> -__extension__ extern __inline uint32x4_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t
> __p)
> -{
> - return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqrshruntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshrunbq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
> -{
> - return __arm_vqshruntq_n_s32 (__a, __b, __imm);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> +__extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> +__arm_vqmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p)
>  {
> - return __arm_vqmovunbq_m_s32 (__a, __b, __p);
> + return __arm_vqmovnbq_m_s32 (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> +__extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqmovuntq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
> +__arm_vqmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p)
>  {
> - return __arm_vqmovuntq_m_s32 (__a, __b, __p);
> + return __arm_vqmovntq_m_s32 (__a, __b, __p);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> +__extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vrev32q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p)
>  {
> - return __arm_vqrshrntq_n_u32 (__a, __b, __imm);
> + return __arm_vrev32q_m_s16 (__inactive, __a, __p);
>  }
> 
> -__extension__ extern __inline uint16x8_t
> +__extension__ extern __inline uint32x4_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t
> __p)
>  {
> - return __arm_vqshrnbq_n_u32 (__a, __b, __imm);
> + return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p);
>  }
> 
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vqrshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
>  {
> - return __arm_vqshrntq_n_u32 (__a, __b, __imm);
> + return __arm_vqrshruntq_n_s32 (__a, __b, __imm);
>  }
> 
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vqshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm)
>  {
> - return __arm_vrshrnbq_n_u32 (__a, __b, __imm);
> + return __arm_vqshrunbq_n_s32 (__a, __b, __imm);
>  }
> 
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vqshruntq (uint16x8_t __a, int32x4_t __b, const int __imm)
>  {
> - return __arm_vrshrntq_n_u32 (__a, __b, __imm);
> + return __arm_vqshruntq_n_s32 (__a, __b, __imm);
>  }
> 
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
>  {
> - return __arm_vshrnbq_n_u32 (__a, __b, __imm);
> + return __arm_vqmovunbq_m_s32 (__a, __b, __p);
>  }
> 
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm)
> +__arm_vqmovuntq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p)
>  {
> - return __arm_vshrntq_n_u32 (__a, __b, __imm);
> + return __arm_vqmovuntq_m_s32 (__a, __b, __p);
>  }
> 
>  __extension__ extern __inline uint64_t
> @@ -20489,62 +19737,6 @@ __arm_vqdmulltq_m (int32x4_t __inactive,
> int16x8_t __a, int16x8_t __b, mve_pred1
>   return __arm_vqdmulltq_m_s16 (__inactive, __a, __b, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrnbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrnbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrnbq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrnbq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrntq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqrshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqrshrntq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqrshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> @@ -20573,62 +19765,6 @@ __arm_vqrshruntq_m (uint8x16_t __a,
> int16x8_t __b, const int __imm, mve_pred16_t
>   return __arm_vqrshruntq_m_n_s16 (__a, __b, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrnbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrnbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrnbq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrnbq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrntq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vqshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vqshrntq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline uint16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vqshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> @@ -20692,62 +19828,6 @@ __arm_vrmlsldavhaxq_p (int64_t __a, int32x4_t
> __b, int32x4_t __c, mve_pred16_t _
>   return __arm_vrmlsldavhaxq_p_s32 (__a, __b, __c, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrnbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrnbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrnbq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrnbq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrntq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vrshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vrshrntq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline int16x8_t
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vshllbq_m (int16x8_t __inactive, int8x16_t __a, const int __imm,
> mve_pred16_t __p)
> @@ -20804,62 +19884,6 @@ __arm_vshlltq_m (uint32x4_t __inactive,
> uint16x8_t __a, const int __imm, mve_pre
>   return __arm_vshlltq_m_n_u16 (__inactive, __a, __imm, __p);
>  }
> 
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrnbq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrnbq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrnbq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrnbq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrntq_m_n_s32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline int8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrntq_m_n_s16 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint16x8_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrntq_m_n_u32 (__a, __b, __imm, __p);
> -}
> -
> -__extension__ extern __inline uint8x16_t
> -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> -__arm_vshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm,
> mve_pred16_t __p)
> -{
> - return __arm_vshrntq_m_n_u16 (__a, __b, __imm, __p);
> -}
> -
>  __extension__ extern __inline void
>  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
>  __arm_vstrbq_scatter_offset (int8_t * __base, uint8x16_t __offset, int8x16_t
> __value)
> @@ -26775,14 +25799,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16
> (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \
>    int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32
> (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));})
> 
> -#define __arm_vqrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -27006,14 +26022,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]:
> __arm_vmovltq_m_u8 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint8x16_t), p2), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]:
> __arm_vmovltq_m_u16 (__ARM_mve_coerce(__p0, uint32x4_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2));})
> 
> -#define __arm_vshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vcvtaq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -27350,14 +26358,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]:
> __arm_vcmpgeq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t),
> __ARM_mve_coerce2(p1, double)), \
>    int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]:
> __arm_vcmpgeq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t),
> __ARM_mve_coerce2(p1, double)));})
> 
> -#define __arm_vrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vrev16q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -27370,22 +26370,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2));})
> 
> -#define __arm_vqshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
> -#define __arm_vqshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -27420,14 +26404,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqmovuntq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2));})
> 
> -#define __arm_vqrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -28568,14 +27544,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16
> (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \
>    int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32
> (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));})
> 
> -#define __arm_vqrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -28885,22 +27853,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> 
> -#define __arm_vshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
> -#define __arm_vrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vrev32q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -28921,36 +27873,12 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]:
> __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int8x16_t), p2), \
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]:
> __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint8x16_t), p2));})
> 
> -#define __arm_vqshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2));})
> 
> -#define __arm_vqrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
> -#define __arm_vqshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
>  #define __arm_vqmovuntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -29474,22 +28402,6 @@ extern void *__ARM_undef;
> 
>  #endif /* MVE Integer.  */
> 
> -#define __arm_vshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> -
> -
> -#define __arm_vrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> 
> 
>  #define __arm_vmvnq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \
> @@ -29798,22 +28710,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]:
> __arm_vshllbq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshllbq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));})
> 
> -#define __arm_vshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
> -#define __arm_vshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
>  #define __arm_vshlltq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -29822,14 +28718,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]:
> __arm_vshlltq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \
>    int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]:
> __arm_vshlltq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));})
> 
> -#define __arm_vrshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vrshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vrshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vrshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vrshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
>  #define __arm_vqshruntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -29842,22 +28730,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrunbq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrunbq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> 
> -#define __arm_vqrshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
> -#define __arm_vqrshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqrshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqrshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
>  #define __arm_vqrshrunbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> @@ -29870,30 +28742,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqrshruntq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
>    int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqrshruntq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3));})
> 
> -#define __arm_vqshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
> -#define __arm_vqshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vqshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vqshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vqshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vqshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
> -#define __arm_vrshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0,
> \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]:
> __arm_vrshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t),
> __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]:
> __arm_vrshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t),
> __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]:
> __arm_vrshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t),
> __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]:
> __arm_vrshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t),
> __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));})
> -
>  #define __arm_vmlaldavaq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    __typeof(p2) __p2 = (p2); \
> --
> 2.34.1
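
For reference, a minimal usage sketch (an editorial illustration, not part of the patch): assuming <arm_mve.h> is included, MVE is enabled (e.g. -march=armv8.1-m.main+mve), and the default user-namespace aliases are available, the overloaded vshrnbq/vshrnbq_m entry points whose _Generic dispatch is removed above keep working for users, with overload resolution now handled by the builtins framework. The function names below are invented for the example; the shift immediate must be a compile-time constant in [1, 16] for the 32-to-16-bit narrowing shown.

#include <arm_mve.h>

/* Shift each 32-bit lane of 'wide' right by 8 and write the narrowed
   results into the bottom (even-numbered) 16-bit lanes of 'acc'; the
   top (odd-numbered) lanes of 'acc' are left unchanged.  This overload
   resolves to vshrnbq_n_s32.  */
int16x8_t
narrow_bottom (int16x8_t acc, int32x4_t wide)
{
  return vshrnbq (acc, wide, 8);
}

/* Predicated variant: result lanes whose predicate is false keep the
   value already in 'acc'.  This overload resolves to vshrnbq_m_n_s32.  */
int16x8_t
narrow_bottom_m (int16x8_t acc, int32x4_t wide, mve_pred16_t p)
{
  return vshrnbq_m (acc, wide, 8, p);
}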



Thread overview: 46+ messages
2023-05-05  8:39 [PATCH 01/23] arm: [MVE intrinsics] add binary_round_lshift shape Christophe Lyon
2023-05-05  8:39 ` [PATCH 02/23] arm: [MVE intrinsics] factorize vqrshlq vrshlq Christophe Lyon
2023-05-05  9:58   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 03/23] arm: [MVE intrinsics] rework vrshlq vqrshlq Christophe Lyon
2023-05-05  9:59   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 04/23] arm: [MVE intrinsics] factorize vqshlq vshlq Christophe Lyon
2023-05-05 10:00   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 05/23] arm: [MVE intrinsics] rework vqrdmulhq Christophe Lyon
2023-05-05 10:01   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 06/23] arm: [MVE intrinsics] factorize vabdq Christophe Lyon
2023-05-05 10:48   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 07/23] arm: [MVE intrinsics] rework vabdq Christophe Lyon
2023-05-05 10:49   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 08/23] arm: [MVE intrinsics] add binary_lshift shape Christophe Lyon
2023-05-05 10:51   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 09/23] arm: [MVE intrinsics] add support for MODE_r Christophe Lyon
2023-05-05 10:55   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 10/23] arm: [MVE intrinsics] add binary_lshift_r shape Christophe Lyon
2023-05-05 10:56   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 11/23] arm: [MVE intrinsics] add unspec_mve_function_exact_insn_vshl Christophe Lyon
2023-05-05 10:56   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 12/23] arm: [MVE intrinsics] rework vqshlq vshlq Christophe Lyon
2023-05-05 10:58   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 13/23] arm: [MVE intrinsics] factorize vmaxq vminq Christophe Lyon
2023-05-05 10:58   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 14/23] arm: [MVE intrinsics] rework " Christophe Lyon
2023-05-05 10:59   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 15/23] arm: [MVE intrinsics] add binary_rshift_narrow shape Christophe Lyon
2023-05-05 11:00   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 16/23] arm: [MVE intrinsics] factorize vshrntq vshrnbq vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq Christophe Lyon
2023-05-05 11:00   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq " Christophe Lyon
2023-05-05 11:02   ` Kyrylo Tkachov [this message]
2023-05-05  8:39 ` [PATCH 18/23] arm: [MVE intrinsics] add binary_rshift_narrow_unsigned shape Christophe Lyon
2023-05-05 11:03   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 19/23] arm: [MVE intrinsics] factorize vqrshrunb vqrshrunt vqshrunb vqshrunt Christophe Lyon
2023-05-05 11:04   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 20/23] arm: [MVE intrinsics] rework vqrshrunbq vqrshruntq vqshrunbq vqshruntq Christophe Lyon
2023-05-05 11:05   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 21/23] arm: [MVE intrinsics] add binary_rshift shape Christophe Lyon
2023-05-05 11:05   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 22/23] arm: [MVE intrinsics] factorize vsrhrq vrshrq Christophe Lyon
2023-05-05 11:06   ` Kyrylo Tkachov
2023-05-05  8:39 ` [PATCH 23/23] arm: [MVE intrinsics] rework vshrq vrshrq Christophe Lyon
2023-05-05 11:07   ` Kyrylo Tkachov
2023-05-05  9:55 ` [PATCH 01/23] arm: [MVE intrinsics] add binary_round_lshift shape Kyrylo Tkachov
