[PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension
From: Feng Wang @ 2023-12-04 2:57 UTC
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md) for the crypto vector Zvbb extension. Test cases covering the whole API are added as well.
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
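For context, the new intrinsics follow the usual RVV intrinsic style. A minimal usage sketch (types and call signatures mirror the test cases below; compile with e.g. -march=rv64gc_zvbb):

#include <riscv_vector.h>

/* Element-wise and-not, rotate-left by a scalar amount, then
   count-leading-zeros, all on unsigned 32-bit elements.  */
vuint32m1_t zvbb_demo (vuint32m1_t a, vuint32m1_t b, size_t sh, size_t vl)
{
  vuint32m1_t t = __riscv_vandn_vv_u32m1 (a, b, vl);  /* vandn.vv */
  t = __riscv_vrol_vx_u32m1 (t, sh, vl);              /* vrol.vx  */
  return __riscv_vclz_v_u32m1 (t, vl);                /* vclz.v   */
}

A second sketch after the ChangeLog entries illustrates the overloaded naming.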
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvbb to riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class vandn): Add new function_base for Zvbb.
(class ror_rol): Ditto.
(class b_reverse): Ditto.
(class vcltz): Ditto.
(class vwsll): Ditto.
(BASE): Add Zvbb BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_def): Add new function_builder for Zvbb.
(SHAPE): Add Zvbb SHAPE declaration.
* config/riscv/riscv-vector-builtins-shapes.h: Ditto.
* config/riscv/riscv-vector-builtins.cc (DEF_VECTOR_CRYPTO_FUNCTION): Read crypto vector function def.
(handle_pragma_vector): Register intrinsic functions for crypto vector.
* config/riscv/riscv-vector-builtins.h (struct crypto_function_group_info):
Definition of crypto_function_group_info, add AVAIL info.
* config/riscv/riscv.md: Add Zvbb insn type names.
* config/riscv/t-riscv: Add the dependency file for building.
* config/riscv/vector.md: Add the corresponding attribute for Zvbb.
* config/riscv/riscv-vector-crypto-builtins-avail.h: New file. The enable conditions.
* config/riscv/riscv-vector-crypto-builtins-functions.def: New file. The intrinsic definition file.
* config/riscv/vector-crypto.md: New file. Crypto vector md file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvbb/vandn.c: New test.
* gcc.target/riscv/zvk/zvbb/vandn_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vbrev.c: New test.
* gcc.target/riscv/zvk/zvbb/vbrev8.c: New test.
* gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vclz.c: New test.
* gcc.target/riscv/zvk/zvbb/vclz_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vctz.c: New test.
* gcc.target/riscv/zvk/zvbb/vctz_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vrev8.c: New test.
* gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vrol.c: New test.
* gcc.target/riscv/zvk/zvbb/vrol_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vror.c: New test.
* gcc.target/riscv/zvk/zvbb/vror_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/vwsll.c: New test.
* gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbb/zvkb.c: New test.
* gcc.target/riscv/zvk/zvk.exp: New test.
* gcc.target/riscv/zvkb.c: New test.
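For reference, the zvbb_def shape also registers overloaded names; as noted in the riscv-vector-builtins-shapes.cc hunk, the masked overload does not carry an explicit _m suffix. A minimal sketch mirroring the *_overloaded.c tests:

#include <riscv_vector.h>

/* Both calls resolve to vandn.vv; the masked form passes the mask
   first but keeps the plain __riscv_vandn name (no _m suffix).  */
vuint32m1_t andn_plain (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl)
{
  return __riscv_vandn (vs2, vs1, vl);
}

vuint32m1_t andn_masked (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1,
                         size_t vl)
{
  return __riscv_vandn (mask, vs2, vs1, vl);
}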
---
gcc/common/config/riscv/riscv-common.cc | 2 +
.../riscv/riscv-vector-builtins-bases.cc | 104 +-
.../riscv/riscv-vector-builtins-bases.h | 10 +
.../riscv/riscv-vector-builtins-shapes.cc | 27 +-
.../riscv/riscv-vector-builtins-shapes.h | 2 +
gcc/config/riscv/riscv-vector-builtins.cc | 40 +
gcc/config/riscv/riscv-vector-builtins.h | 8 +
.../riscv-vector-crypto-builtins-avail.h | 18 +
...riscv-vector-crypto-builtins-functions.def | 19 +
gcc/config/riscv/riscv.md | 12 +-
gcc/config/riscv/t-riscv | 2 +
gcc/config/riscv/vector-crypto.md | 207 ++++
gcc/config/riscv/vector.md | 31 +-
.../gcc.target/riscv/zvk/zvbb/vandn.c | 1072 ++++++++++++++++
.../riscv/zvk/zvbb/vandn_overloaded.c | 1072 ++++++++++++++++
.../gcc.target/riscv/zvk/zvbb/vbrev.c | 542 +++++++++
.../gcc.target/riscv/zvk/zvbb/vbrev8.c | 542 +++++++++
.../riscv/zvk/zvbb/vbrev8_overloaded.c | 543 +++++++++
.../riscv/zvk/zvbb/vbrev_overloaded.c | 542 +++++++++
.../gcc.target/riscv/zvk/zvbb/vclz.c | 187 +++
.../riscv/zvk/zvbb/vclz_overloaded.c | 187 +++
.../gcc.target/riscv/zvk/zvbb/vctz.c | 187 +++
.../riscv/zvk/zvbb/vctz_overloaded.c | 188 +++
.../gcc.target/riscv/zvk/zvbb/vrev8.c | 542 +++++++++
.../riscv/zvk/zvbb/vrev8_overloaded.c | 542 +++++++++
.../gcc.target/riscv/zvk/zvbb/vrol.c | 1072 ++++++++++++++++
.../riscv/zvk/zvbb/vrol_overloaded.c | 1072 ++++++++++++++++
.../gcc.target/riscv/zvk/zvbb/vror.c | 1073 +++++++++++++++++
.../riscv/zvk/zvbb/vror_overloaded.c | 1072 ++++++++++++++++
.../gcc.target/riscv/zvk/zvbb/vwsll.c | 736 +++++++++++
.../riscv/zvk/zvbb/vwsll_overloaded.c | 736 +++++++++++
.../gcc.target/riscv/zvk/zvbb/zvkb.c | 48 +
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 41 +
gcc/testsuite/gcc.target/riscv/zvkb.c | 13 +
34 files changed, 12475 insertions(+), 16 deletions(-)
create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
create mode 100755 gcc/config/riscv/vector-crypto.md
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
create mode 100644 gcc/testsuite/gcc.target/riscv/zvkb.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 6c210412515..a5fb492c690 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -120,6 +120,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvksc", "zvbc"},
{"zvksg", "zvks"},
{"zvksg", "zvkg"},
+ {"zvbb", "zvkb"},
+ {"zvkb", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..e41343b4a1a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2127,6 +2127,88 @@ public:
}
};
+/* Below implementations are for the crypto vector extensions.  */
+/* Implements vandn.[vv,vx] */
+class vandn : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ return e.use_exact_insn (code_for_pred_vandn (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (code_for_pred_vandn_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vrol/vror.[vv,vx] */
+template<int UNSPEC>
+class ror_rol : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (code_for_pred_v_scalar (UNSPEC, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vbrev/vbrev8/vrev8.v */
+template<int UNSPEC>
+class b_reverse : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+ }
+};
+
+/* Implements vclz/vctz.v */
+template<int UNSPEC>
+class vcltz : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_vc (UNSPEC, e.vector_mode ()));
+ }
+};
+
+/* Implements vwsll */
+class vwsll : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ return e.use_exact_insn (code_for_pred_vwsll (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (code_for_pred_vwsll_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
static CONSTEXPR const vsetvl<false> vsetvl_obj;
static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -2384,6 +2466,17 @@ static CONSTEXPR const seg_indexed_store<UNSPEC_UNORDERED> vsuxseg_obj;
static CONSTEXPR const seg_indexed_store<UNSPEC_ORDERED> vsoxseg_obj;
static CONSTEXPR const vlsegff vlsegff_obj;
+/* Crypto Vector */
+static CONSTEXPR const vandn vandn_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROL> vrol_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROR> vror_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV> vbrev_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV8> vbrev8_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VREV8> vrev8_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCLZ> vclz_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCTZ> vctz_obj;
+static CONSTEXPR const vwsll vwsll_obj;
+
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
#define BASE(NAME) \
@@ -2645,5 +2738,14 @@ BASE (vloxseg)
BASE (vsuxseg)
BASE (vsoxseg)
BASE (vlsegff)
-
+/* Crypto vector */
+BASE (vandn)
+BASE (vbrev)
+BASE (vbrev8)
+BASE (vrev8)
+BASE (vclz)
+BASE (vctz)
+BASE (vrol)
+BASE (vror)
+BASE (vwsll)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..2f46974bd27 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -280,6 +280,16 @@ extern const function_base *const vloxseg;
extern const function_base *const vsuxseg;
extern const function_base *const vsoxseg;
extern const function_base *const vlsegff;
+/* Below function_base are for the vector crypto extensions.  */
+extern const function_base *const vandn;
+extern const function_base *const vbrev;
+extern const function_base *const vbrev8;
+extern const function_base *const vrev8;
+extern const function_base *const vclz;
+extern const function_base *const vctz;
+extern const function_base *const vrol;
+extern const function_base *const vror;
+extern const function_base *const vwsll;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..a98c2389fbc 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -984,6 +984,31 @@ struct seg_fault_load_def : public build_base
}
};
+/* vandn/vbrev/vbrev8/vrev8/vclz/vctz/vrol/vror/vwsll class.  */
+struct zvbb_def : public build_base
+{
+ char *get_name (function_builder &b, const function_instance &instance,
+ bool overloaded_p) const override
+ {
+ /* Return nullptr if it cannot be overloaded.  */
+ if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+ return nullptr;
+
+ b.append_base_name (instance.base_name);
+ if (!overloaded_p)
+ {
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ b.append_name (type_suffixes[instance.type.index].vector);
+ }
+ /* According to the vector crypto intrinsic doc, the "_m" suffix is
+ not added to the vop_m C++ overloaded API.  */
+ if (overloaded_p && instance.pred == PRED_TYPE_m)
+ return b.finish_name ();
+ b.append_name (predication_suffixes[instance.pred]);
+ return b.finish_name ();
+ }
+};
+
SHAPE(vsetvl, vsetvl)
SHAPE(vsetvl, vsetvlmax)
SHAPE(loadstore, loadstore)
@@ -1012,5 +1037,5 @@ SHAPE(vlenb, vlenb)
SHAPE(seg_loadstore, seg_loadstore)
SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
SHAPE(seg_fault_load, seg_fault_load)
-
+SHAPE(zvbb, zvbb)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..e8959a2f277 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -52,6 +52,8 @@ extern const function_shape *const vlenb;
extern const function_shape *const seg_loadstore;
extern const function_shape *const seg_indexed_loadstore;
extern const function_shape *const seg_fault_load;
+/* Below function_shape are for the vector crypto extensions.  */
+extern const function_shape *const zvbb;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6330a3a41c3..7a1da5c4539 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
#include "riscv-vector-builtins.h"
#include "riscv-vector-builtins-shapes.h"
#include "riscv-vector-builtins-bases.h"
+#include "riscv-vector-crypto-builtins-avail.h"
using namespace riscv_vector;
@@ -754,6 +755,11 @@ static CONSTEXPR const rvv_arg_type_info v_size_args[]
= {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info (RVV_BASE_size),
rvv_arg_type_info_end};
+/* A list of args for vector_type func (double demote_type, size_t) function. */
+static CONSTEXPR const rvv_arg_type_info wv_size_args[]
+ = {rvv_arg_type_info (RVV_BASE_double_trunc_vector),
+ rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
/* A list of args for vector_type func (vector_type, vector_type, size)
* function. */
static CONSTEXPR const rvv_arg_type_info vv_size_args[]
@@ -1044,6 +1050,14 @@ static CONSTEXPR const rvv_op_info u_v_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
end_args /* Args */};
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_vv_ops
+ = {u_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ v_args /* Args */};
+
/* A static operand information for unsigned long func (vector_type)
* function registration. */
static CONSTEXPR const rvv_op_info b_ulong_m_ops
@@ -2174,6 +2188,14 @@ static CONSTEXPR const rvv_op_info u_wvv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wvv_args /* Args */};
+/* A static operand information for vector_type func (double demote type, size type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_shift_wvx_ops
+ = {wextu_ops, /* Types */
+ OP_TYPE_vx, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wv_size_args /* Args */};
+
/* A static operand information for vector_type func (double demote type, double
* demote scalar_type) function registration. */
static CONSTEXPR const rvv_op_info i_wvx_ops
@@ -2689,6 +2711,14 @@ static function_group_info function_groups[] = {
#include "riscv-vector-builtins-functions.def"
};
+/* A list of all Vector Crypto intrinsic functions. */
+static crypto_function_group_info crypto_function_groups[] = {
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL) \
+ {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, \
+ riscv_vector_crypto_avail_ ## AVAIL},
+#include "riscv-vector-crypto-builtins-functions.def"
+};
+
/* The RVV types, with their built-in
"__rvv..._t" name. Allow an index of NUM_VECTOR_TYPES, which always
yields a null tree. */
@@ -4414,6 +4444,16 @@ handle_pragma_vector ()
function_builder builder;
for (unsigned int i = 0; i < ARRAY_SIZE (function_groups); ++i)
builder.register_function_group (function_groups[i]);
+
+ /* Crypto vector intrinsic functions depend on the vector extension,
+ so their registration is placed last.  */
+ for (unsigned int i = 0; i < ARRAY_SIZE (crypto_function_groups); ++i)
+ {
+ crypto_function_group_info *f = &crypto_function_groups[i];
+ if (f->avail ())
+ builder.register_function_group (f->rvv_function_group_info);
+ }
}
/* Return the function decl with RVV function subcode CODE, or error_mark_node
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index cd8ccab1724..a62e8cb4845 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -234,6 +234,14 @@ struct function_group_info
const rvv_op_info ops_infos;
};
+/* Static information about a set of crypto vector functions. */
+struct crypto_function_group_info
+{
+ struct function_group_info rvv_function_group_info;
+ /* Whether the function is available. */
+ unsigned int (*avail) (void);
+};
+
class GTY ((user)) function_instance
{
public:
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
new file mode 100755
index 00000000000..2719027a7da
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -0,0 +1,18 @@
+#ifndef GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+#define GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+
+#include "insn-codes.h"
+namespace riscv_vector {
+
+/* Declare an availability predicate for built-in functions. */
+#define AVAIL(NAME, COND) \
+ static unsigned int \
+ riscv_vector_crypto_avail_##NAME (void) \
+ { \
+ return (COND); \
+ }
+
+AVAIL (zvbb, TARGET_ZVBB)
+AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
+}
+#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
new file mode 100755
index 00000000000..f3371f28a42
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -0,0 +1,19 @@
+#ifndef DEF_VECTOR_CRYPTO_FUNCTION
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL)
+#endif
+
+
+// ZVBB
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb, full_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_shift_wvx_ops, zvbb)
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..2a3777e168c 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,15 @@
;; vcompress vector compress instruction
;; vmov whole vector register move
;; vector unknown vector instruction
+;; vandn crypto vector bitwise and-not instructions
+;; vbrev crypto vector reverse bits in elements instructions
+;; vbrev8 crypto vector reverse bits in bytes instructions
+;; vrev8 crypto vector reverse bytes instructions
+;; vclz crypto vector count leading zeros instructions
+;; vctz crypto vector count trailing zeros instructions
+;; vrol crypto vector rotate left instructions
+;; vror crypto vector rotate right instructions
+;; vwsll crypto vector widening shift left logical instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +456,7 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
- vgather,vcompress,vmov,vector"
+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3756,7 @@
(include "thead.md")
(include "generic-ooo.md")
(include "vector.md")
+(include "vector-crypto.md")
(include "zicond.md")
(include "zc.md")
(include "corev.md")
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 3b9686daa58..429d36b6425 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -1,6 +1,8 @@
RISCV_BUILTINS_H = $(srcdir)/config/riscv/riscv-vector-builtins.h \
$(srcdir)/config/riscv/riscv-vector-builtins.def \
$(srcdir)/config/riscv/riscv-vector-builtins-functions.def \
+ $(srcdir)/config/riscv/riscv-vector-crypto-builtins-avail.h \
+ $(srcdir)/config/riscv/riscv-vector-crypto-builtins-functions.def \
riscv-vector-type-indexer.gen.def
riscv-builtins.o: $(srcdir)/config/riscv/riscv-builtins.cc $(CONFIG_H) \
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..0373cf6f48a
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,207 @@
+(define_c_enum "unspec" [
+ ;; Zvbb unspecs
+ UNSPEC_VANDN
+ UNSPEC_VBREV
+ UNSPEC_VBREV8
+ UNSPEC_VREV8
+ UNSPEC_VCLZ
+ UNSPEC_VCTZ
+ UNSPEC_VROL
+ UNSPEC_VROR
+ UNSPEC_VWSLL
+])
+
+(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
+
+(define_int_attr lt [(UNSPEC_VCLZ "lz") (UNSPEC_VCTZ "tz")])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
+
+(define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+;; Zvbb instruction patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr")
+ (match_operand:VI 4 "register_operand" "vr,vr")]UNSPEC_VANDN)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr")
+ (match_operand:<VEL> 4 "register_operand" "r, r")]UNSPEC_VANDN)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr")
+ (match_operand:VI 4 "register_operand" "vr,vr")]UNSPEC_VRORL)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<ror_rol>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "v<ror_rol>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 5 "vector_length_operand" "rK, rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr, vr")
+ (match_operand 4 "pmode_register_operand" "r, r")]UNSPEC_VRORL)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<ror_rol>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "v<ror_rol>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 5 "vector_length_operand" "rK, rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr, vr")
+ (match_operand 4 "const_csr_operand" "K, K")]UNSPEC_VROR)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vror.vi\t%0,%3,%4%p1"
+ [(set_attr "type" "vror")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VWEXTI
+ [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr")]UNSPEC_VWSLL)
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<VWEXTI:MODE>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VWEXTI
+ [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr")
+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK")]UNSPEC_VWSLL)
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<VWEXTI:MODE>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr")]UNSPEC_VRBB8)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<rev>.v\t%0,%3%p1"
+ [(set_attr "type" "v<rev>")
+ (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+
+(define_insn "@pred_vc<lt><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd")
+ (unspec:VI
+ [(match_operand:VI 2 "register_operand" "vr")
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)]UNSPEC_VCLTZ))]
+ "TARGET_ZVBB"
+ "vc<lt>.v\t%0,%2%p1"
+ [(set_attr "type" "vc<lt>")
+ (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..3e08e18d355 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,8 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll")
(const_string "true")]
(const_string "false")))
@@ -74,7 +75,8 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll")
(const_string "true")]
(const_string "false")))
@@ -698,7 +700,8 @@
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll")
(const_int 2)
(eq_attr "type" "vimerge,vfmerge,vcompress")
@@ -740,7 +743,7 @@
vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +758,14 @@
vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll")
(const_int 5)
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -770,7 +774,7 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,7 +790,7 @@
vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(symbol_ref "riscv_vector::get_ta(operands[6])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -800,7 +804,7 @@
(define_attr "ma" ""
(cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(symbol_ref "riscv_vector::get_ma(operands[6])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +819,8 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll")
(symbol_ref "riscv_vector::get_ma(operands[7])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -831,7 +836,7 @@
vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
- vimovxv,vfmovfv,vlsegde,vlsegdff")
+ vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 7)
(eq_attr "type" "vldm,vstm,vmalu,vmalu")
(const_int 5)
@@ -848,7 +853,7 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(const_int 8)
(eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
(const_int 5)
@@ -859,7 +864,7 @@
(eq_attr "type" "vmsfs,vmidx,vcompress")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
(const_int 4)]
(const_int INVALID_ATTRIBUTE)))
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
new file mode 100644
index 00000000000..b044c020f71
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
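
For reference, the semantics these tests pin down: per the Zvbb specification, vandn.vv computes vd[i] = vs2[i] & ~vs1[i] and vandn.vx computes vd[i] = vs2[i] & ~rs1, while the _tu/_tum/_tumu/_mu suffixes select the vsetvli tail/mask policy pairs (tu,ma; tu,ma masked; tu,mu; ta,mu) that the scan-assembler-times directives above count. Below is a minimal scalar sketch of the mask-undisturbed (_mu) variant under those assumptions; the helper name and the packed bit-per-element mask layout are illustrative only and not part of the patch.

#include <stdint.h>
#include <stddef.h>

/* Illustrative scalar model of __riscv_vandn_vx_u8m1_mu (not part of
   the patch): active elements compute vs2[i] & ~rs1, masked-off
   elements keep maskedoff[i] ("mask undisturbed"), and elements at
   index >= vl are tail elements, handled by the ta policy.  */
static void
vandn_vx_u8_mu_ref (const uint8_t *mask_bits, uint8_t *vd,
                    const uint8_t *maskedoff, const uint8_t *vs2,
                    uint8_t rs1, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    {
      int active = (mask_bits[i / 8] >> (i % 8)) & 1;
      vd[i] = active ? (uint8_t) (vs2[i] & (uint8_t) ~rs1) : maskedoff[i];
    }
}

In the real intrinsic, maskedoff doubles as the destination's prior value; it is split out here only to keep the sketch readable.
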
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
new file mode 100644
index 00000000000..9af458cd671
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
new file mode 100644
index 00000000000..6725744071d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
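+/* These tests exercise the vbrev intrinsics, which reverse the bit order
+   within each element (e.g. the 8-bit element 0x01 becomes 0x80), across
+   every LMUL/SEW variant and the _m/_tu/_tum/_tumu/_mu policy forms.  */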
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
new file mode 100644
index 00000000000..bb9cffe6797
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
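+/* These tests exercise the vbrev8 intrinsics, which reverse the bit order
+   within each byte of each element (0x01 becomes 0x80 in every byte),
+   across every LMUL/SEW variant and the _m/_tu/_tum/_tumu/_mu forms.  */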
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
new file mode 100644
index 00000000000..25f6b1c343f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
@@ -0,0 +1,543 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
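+/* Same coverage as vbrev8.c, but through the overloaded __riscv_vbrev8
+   entry points so that the compiler infers the type-specific variant
+   from the argument types.  */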
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
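The _overloaded files differ from the explicitly suffixed tests only in spelling: they call the type-generic __riscv_vbrev8 and let overload resolution pick the variant from the argument types, so the expected assembly and the dg-final counts are identical. A minimal sketch contrasting the two spellings (hypothetical helper name, assuming both forms are exposed via riscv_vector.h as in the rvv-intrinsic-doc):

vuint8m1_t rev8_twice(vuint8m1_t vs2, size_t vl) {
  vuint8m1_t t = __riscv_vbrev8_v_u8m1(vs2, vl); /* explicit, non-overloaded */
  return __riscv_vbrev8(t, vl);                  /* type-generic, overloaded */
}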
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
new file mode 100644
index 00000000000..8a4d7bb7f8c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
new file mode 100644
index 00000000000..df19efd3c7d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vclz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 22 } } */
\ No newline at end of file
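Unlike the vbrev8/vbrev tests, this file and the three clz/ctz files that follow exercise only the plain and _m forms, with no tail/mask policy variants, so all 2 * 22 = 44 calls fall into the single ta,ma vsetvli bucket, the bare vclz.v (or vctz.v) pattern matches 44 times, and the masked half contributes the 22 v0.t matches.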
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
new file mode 100644
index 00000000000..b4466bd4c23
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vclz(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
new file mode 100644
index 00000000000..ce3cc6dc551
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vctz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
new file mode 100644
index 00000000000..df083ea4504
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
@@ -0,0 +1,188 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vctz(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
new file mode 100644
index 00000000000..a6533d11081
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
new file mode 100644
index 00000000000..49ce752376b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
new file mode 100644
index 00000000000..2bc8c44d738
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
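+/* _tu variants: unmasked, tail undisturbed -- tail elements are taken
+   from the maskedoff operand (vsetvli tu,ma).  */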
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
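+/* _tum variants: masked, tail undisturbed, mask agnostic (vsetvli tu,ma).  */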
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
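+/* _tumu variants: masked, tail undisturbed, mask undisturbed (vsetvli tu,mu).  */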
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
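+/* _mu variants: masked, tail agnostic, mask undisturbed (vsetvli ta,mu).  */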
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
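+/* Counts below: 22 type/LMUL combinations (u8 mf8-m8, u16 mf4-m8,
+   u32 mf2-m8, u64 m1-m8) x 2 operand forms (.vv/.vx) x 6 variants
+   (plain, _m, _tu, _tum, _tumu, _mu).  Each of vrol.vv and vrol.vx
+   therefore appears 22 * 6 = 132 times, 22 * 4 = 88 of them masked;
+   the vsetvli policies split as ta,ma (plain, _m), tu,ma (_tu, _tum),
+   tu,mu (_tumu) and ta,mu (_mu).  */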
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
new file mode 100644
index 00000000000..2ce4559dc29
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
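+/* Overloaded API: __riscv_vrol dispatches on its operand types, so a
+   single spelling covers every SEW/LMUL combination exercised below.  */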
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(vs2, rs1, vl);
+}
+
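+/* Masked forms: the leading vbool mask argument selects the _m overload.  */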
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
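+/* Policy variants keep explicit _tu/_tum/_tumu/_mu suffixes even when
+   overloaded: _tum, _tumu and _mu share the same (mask, maskedoff, ...)
+   signature, so the policy cannot be deduced from the argument types alone.  */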
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
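
Each of these api-tests follows the same pattern: the intrinsic is
instantiated for all 22 unsigned element-type/LMUL combinations (u8
mf8 through m8, u16 mf4 through m8, u32 mf2 through m8, u64 m1 through
m8), in both the .vv and .vx operand forms, under every masking/policy
variant. The suffix names the destination policy: no suffix (tail
agnostic, mask agnostic), _m (masked), _tu (tail undisturbed), _tum
(tail undisturbed, masked), _tumu (tail and mask undisturbed), _mu
(mask undisturbed). A minimal compile-only sketch for a single type,
assuming the same rv64gc_zvbb target as the tests and using only
intrinsic names and signatures that appear in them (the function name
is illustrative, not part of the patch):

#include <stdint.h>
#include <riscv_vector.h>

/* Compile-only sketch of the policy variants for one type; the
   combining vxor calls only keep every result live.  */
vuint32m1_t sketch_vrol_policies(vbool32_t mask, vuint32m1_t maskedoff,
                                 vuint32m1_t vs2, size_t rs1, size_t vl) {
  vuint32m1_t a = __riscv_vrol_vx_u32m1(vs2, rs1, vl);                 /* ta,ma */
  vuint32m1_t b = __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl);         /* ta,ma + v0.t */
  vuint32m1_t c = __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl);   /* tu,ma */
  vuint32m1_t d = __riscv_vrol_vx_u32m1_tumu(mask, maskedoff,
                                             vs2, rs1, vl);            /* tu,mu + v0.t */
  vuint32m1_t ab = __riscv_vxor_vv_u32m1(a, b, vl);
  vuint32m1_t cd = __riscv_vxor_vv_u32m1(c, d, vl);
  return __riscv_vxor_vv_u32m1(ab, cd, vl);
}

The dg-final counts fall out of this combinatorics: 22 combinations
times two operand forms is 44 functions per variant, so ta,ma is
asserted 88 times (plain plus _m), tu,ma 88 times (_tu plus _tum), and
ta,mu and tu,mu 44 times each (_mu, _tumu). The 132 totals count all
six variants of each form, since the unanchored pattern also matches
the masked instructions, while the v0\.t patterns count the four
masked variants (88).
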
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
new file mode 100644
index 00000000000..144be5c2756
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
@@ -0,0 +1,1073 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
new file mode 100644
index 00000000000..bebcf7facad
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
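+/* These tests exercise the type-overloaded API: __riscv_vror resolves
+   the vv/vx form and element width from its argument types.  The _m,
+   _tu, _tum, _tumu and _mu suffixes select the mask/tail policy
+   variants; maskedoff supplies the values kept in tail or inactive
+   elements.  */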
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
new file mode 100644
index 00000000000..e1946261e4f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
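+/* vwsll is a widening shift: each result element has twice the source
+   element width (EEW) and the result uses twice the LMUL, e.g.
+   vuint8mf8_t inputs produce a vuint16mf4_t result.  The vx form takes
+   a scalar shift amount.  */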
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 60 } } */
\ No newline at end of file
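
As a reader's aid for the file above: vwsll is a widening shift, so each SEW-bit source element is zero-extended to 2*SEW bits before the left shift, and the destination register group uses twice the source LMUL. A minimal sketch of the plain (unmasked, tail-agnostic) form exercised above; the helper name is illustrative only:

#include <stdint.h>
#include <riscv_vector.h>

/* Zero-extend each u8 element to u16, then shift left by 'amount'.
   Note the u8m1 source widens to a u16m2 destination; the shift
   amount is taken modulo 2*SEW (16 here).  */
vuint16m2_t widen_shl_u8(vuint8m1_t src, size_t amount, size_t vl) {
  return __riscv_vwsll_vx_u16m2(src, amount, vl);
}
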
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
new file mode 100644
index 00000000000..512d76f5850
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 60 } } */
\ No newline at end of file
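
For context, a minimal use of the new vwsll intrinsic outside the
testsuite could look as follows (illustrative only, not part of the
patch; assumes -march=rv64gc_zvbb and the standard RVV header):

#include <riscv_vector.h>
#include <stdint.h>

/* Widen each u8 lane to u16 while shifting left by `shift` bits,
   e.g. to scale bytes into halfword lanes in one instruction.  */
vuint16m2_t scale_bytes (vuint8m1_t v, size_t shift, size_t vl)
{
  return __riscv_vwsll_vx_u16m2 (v, shift, vl);  /* emits vwsll.vx */
}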
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
new file mode 100644
index 00000000000..cf956ced59e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
@@ -0,0 +1,48 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+ return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
new file mode 100644
index 00000000000..f0c5431d00c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -0,0 +1,41 @@
+# Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3. If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+ return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+ set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbb/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
+
+# All done.
+dg-finish
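
Later patches in this series extend the main loop above one
subdirectory at a time.  Judging from the "zvk.exp | 2 +" entry in
patch 2/7's diffstat, the Zvbc addition is presumably the analogous
line:

dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
	"" $DEFAULT_CFLAGS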
diff --git a/gcc/testsuite/gcc.target/riscv/zvkb.c b/gcc/testsuite/gcc.target/riscv/zvkb.c
new file mode 100644
index 00000000000..d5c28e79ef6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvkb.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkb" { target { rv64 } } } */
+/* { dg-options "-march=rv32gc_zvkb" { target { rv32 } } } */
+
+#ifndef __riscv_zvkb
+#error "Feature macro not defined"
+#endif
+
+int
+foo (int a)
+{
+ return a;
+}
--
2.17.1
* [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 2:57 ` [PATCH 3/7] RISC-V: Add intrinsic functions for crypto vector Zvkg extension Feng Wang
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/
riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/
vector-crypto/intrinsic_funcs.md) for the crypto vector Zvbc extension. All
the test cases are added for API testing.
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvbc in riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class clmul): Add new function_base for Zvbc.
(BASE): Add Zvbc BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_def): Rename to zvbb_zvbc_def.
(struct zvbb_zvbc_def): Combined function_builder for Zvbb and Zvbc.
(SHAPE): Add Zvbc SHAPE declaration.
* config/riscv/riscv-vector-builtins-shapes.h: Ditto.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_CRYPTO_SEW32_OPS): Define new data struct for Zvbc.
(DEF_RVV_CRYPTO_SEW64_OPS): Ditto.
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vandn): Add intrinsic def.
(vbrev): Ditto.
(vbrev8): Ditto.
(vrev8): Ditto.
(vclz): Ditto.
(vctz): Ditto.
(vrol): Ditto.
(vror): Ditto.
(vwsll): Ditto.
(vclmul): Ditto.
(vclmulh): Ditto.
* config/riscv/riscv.md: Add Zvbc ins name.
* config/riscv/vector-crypto.md (h): Add Zvbc md patterns.
(@pred_vclmul<h><mode>): Ditto.
(@pred_vclmul<h><mode>_scalar): Ditto.
* config/riscv/vector-iterators.md: Add new iterators for Zvbc.
* config/riscv/vector.md: Add the corresponding attribute for Zvbc.
* config/riscv/riscv-vector-crypto-builtins-types.def: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add the zvbc subdirectory to the test loop.
* gcc.target/riscv/zvk/zvbc/vclmul.c: New test.
* gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c: New test.
* gcc.target/riscv/zvk/zvbc/vclmulh.c: New test.
* gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c: New test.
---
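
A minimal usage sketch of the new Zvbc intrinsics, for reviewers
(illustrative only, not part of the commit; assumes -march=rv64gc_zvbc
and the standard RVV vsetvl/load/store intrinsics):

#include <riscv_vector.h>
#include <stdint.h>
#include <stddef.h>

/* Carry-less multiply each element of src by poly, keeping the low
   64 bits of every 128-bit product; __riscv_vclmulh_vx_u64m1 would
   return the high half instead.  */
void
clmul_lo (uint64_t *dst, const uint64_t *src, uint64_t poly, size_t n)
{
  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl)
    {
      vl = __riscv_vsetvl_e64m1 (n);
      vuint64m1_t v = __riscv_vle64_v_u64m1 (src, vl);
      __riscv_vse64_v_u64m1 (dst, __riscv_vclmul_vx_u64m1 (v, poly, vl), vl);
    }
}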
gcc/common/config/riscv/riscv-common.cc | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 22 ++
.../riscv/riscv-vector-builtins-bases.h | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 6 +-
.../riscv/riscv-vector-builtins-shapes.h | 2 +-
gcc/config/riscv/riscv-vector-builtins.cc | 29 +++
.../riscv-vector-crypto-builtins-avail.h | 1 +
...riscv-vector-crypto-builtins-functions.def | 31 +--
.../riscv-vector-crypto-builtins-types.def | 21 ++
gcc/config/riscv/riscv.md | 5 +-
gcc/config/riscv/vector-crypto.md | 50 +++++
gcc/config/riscv/vector-iterators.md | 5 +
gcc/config/riscv/vector.md | 14 +-
.../gcc.target/riscv/zvk/zvbc/vclmul.c | 208 ++++++++++++++++++
.../riscv/zvk/zvbc/vclmul_overloaded.c | 208 ++++++++++++++++++
.../gcc.target/riscv/zvk/zvbc/vclmulh.c | 208 ++++++++++++++++++
.../riscv/zvk/zvbc/vclmulh_overloaded.c | 208 ++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 2 +
18 files changed, 998 insertions(+), 25 deletions(-)
create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-types.def
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index a5fb492c690..296500e15df 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -121,6 +121,7 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvksg", "zvks"},
{"zvksg", "zvkg"},
{"zvbb", "zvkb"},
+ {"zvbc", "v"},
{"zvkb", "v"},
{"zfh", "zfhmin"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index e41343b4a1a..45b1e563ff4 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2209,6 +2209,24 @@ public:
}
};
+template<int UNSPEC>
+class clmul : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ return e.use_exact_insn (code_for_pred_vclmul (UNSPEC, e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (code_for_pred_vclmul_scalar (UNSPEC, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
static CONSTEXPR const vsetvl<false> vsetvl_obj;
static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -2476,6 +2494,8 @@ static CONSTEXPR const b_reverse<UNSPEC_VREV8> vrev8_obj;
static CONSTEXPR const vcltz<UNSPEC_VCLZ> vclz_obj;
static CONSTEXPR const vcltz<UNSPEC_VCTZ> vctz_obj;
static CONSTEXPR const vwsll vwsll_obj;
+static CONSTEXPR const clmul<UNSPEC_VCLMUL> vclmul_obj;
+static CONSTEXPR const clmul<UNSPEC_VCLMULH> vclmulh_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2748,4 +2768,6 @@ BASE (vctz)
BASE (vrol)
BASE (vror)
BASE (vwsll)
+BASE (vclmul)
+BASE (vclmulh)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 2f46974bd27..7d2c86f9162 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -290,6 +290,8 @@ extern const function_base *const vctz;
extern const function_base *const vrol;
extern const function_base *const vror;
extern const function_base *const vwsll;
+extern const function_base *const vclmul;
+extern const function_base *const vclmulh;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index a98c2389fbc..f21c459e6a2 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -984,8 +984,8 @@ struct seg_fault_load_def : public build_base
}
};
-/* vandn/vbrev/vbrev8/vrev8/vclz/vctz/vror[l]/vwsll class. */
-struct zvbb_def : public build_base
+/* vandn/vbrev/vbrev8/vrev8/vclz/vctz/vror[l]/vwsll/vclmul/vclmulh class. */
+struct zvbb_zvbc_def : public build_base
{
char *get_name (function_builder &b, const function_instance &instance,
bool overloaded_p) const override
@@ -1037,5 +1037,5 @@ SHAPE(vlenb, vlenb)
SHAPE(seg_loadstore, seg_loadstore)
SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
SHAPE(seg_fault_load, seg_fault_load)
-SHAPE(zvbb, zvbb)
+SHAPE(zvbb_zvbc, zvbb_zvbc)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index e8959a2f277..a217eae33f0 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -53,7 +53,7 @@ extern const function_shape *const seg_loadstore;
extern const function_shape *const seg_indexed_loadstore;
extern const function_shape *const seg_fault_load;
/* Below function_shape are Vector Crypto*/
-extern const function_shape *const zvbb;
+extern const function_shape *const zvbb_zvbc;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 7a1da5c4539..ffd30c1a806 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -522,6 +522,19 @@ static const rvv_type_info tuple_ops[] = {
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
+/* Below types will be registered for vector-crypto intrinsic functions.  */
+/* A list of SEW32 types to be registered for vector-crypto intrinsic functions.  */
+static const rvv_type_info crypto_sew32_ops[] = {
+#define DEF_RVV_CRYPTO_SEW32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-crypto-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of SEW64 types to be registered for vector-crypto intrinsic functions.  */
+static const rvv_type_info crypto_sew64_ops[] = {
+#define DEF_RVV_CRYPTO_SEW64_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-crypto-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
= rvv_arg_type_info (NUM_BASE_TYPES);
@@ -2626,6 +2639,22 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
ext_vcreate_args /* Args */};
+/* A static operand information for vector_type func (vector_type)
+   function registration.  Some instructions only support SEW=64,
+   such as vclmul.vv and vclmul.vx from the crypto vector Zvbc
+   extension.  */
+static CONSTEXPR const rvv_op_info u_vvv_crypto_sew64_ops
+ = {crypto_sew64_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvx_crypto_sew64_ops
+ = {crypto_sew64_ops, /* Types */
+ OP_TYPE_vx, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vx_args /* Args */};
+
/* A list of all RVV base function types. */
static CONSTEXPR const function_type_info function_types[] = {
#define DEF_RVV_TYPE_INDEX( \
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index 2719027a7da..a63dea6a27b 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -13,6 +13,7 @@ namespace riscv_vector {
}
AVAIL (zvbb, TARGET_ZVBB)
+AVAIL (zvbc, TARGET_ZVBC)
AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index f3371f28a42..d8c74dec4f6 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -4,16 +4,21 @@
// ZVBB
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb, full_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_wvv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_shift_wvx_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb_zvbc, full_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_shift_wvx_ops, zvbb)
+// ZVBC
+DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-types.def b/gcc/config/riscv/riscv-vector-crypto-builtins-types.def
new file mode 100755
index 00000000000..f40367ae2c3
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-types.def
@@ -0,0 +1,21 @@
+#ifndef DEF_RVV_CRYPTO_SEW32_OPS
+#define DEF_RVV_CRYPTO_SEW32_OPS(TYPE, REQUIRE)
+#endif
+
+#ifndef DEF_RVV_CRYPTO_SEW64_OPS
+#define DEF_RVV_CRYPTO_SEW64_OPS(TYPE, REQUIRE)
+#endif
+
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m1_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m2_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m4_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m8_t, 0)
+
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+#undef DEF_RVV_CRYPTO_SEW32_OPS
+#undef DEF_RVV_CRYPTO_SEW64_OPS
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 2a3777e168c..4a853d8238f 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -437,6 +437,8 @@
;; vrol crypto vector rotate left instructions
;; vror crypto vector rotate right instructions
;; vwsll crypto vector widening shift left logical instructions
+;; vclmul vector crypto carry-less multiply - return low half instructions
+;; vclmulh vector crypto carry-less multiply - return high half instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -456,7 +458,8 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
- vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll"
+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
+ vclmul,vclmulh"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index 0373cf6f48a..f3034ba122a 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -9,6 +9,8 @@
UNSPEC_VROL
UNSPEC_VROR
UNSPEC_VWSLL
+ UNSPEC_VCLMUL
+ UNSPEC_VCLMULH
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -17,12 +19,16 @@
(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
(define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
;; zvbb instructions patterns.
;; vandn.vv vandn.vx vrol.vv vrol.vx
;; vror.vv vror.vx vror.vi
@@ -205,3 +211,47 @@
"vc<lt>.v\t%0,%2%p1"
[(set_attr "type" "vc<lt>")
(set_attr "mode" "<MODE>")])
+
+;; zvbc instructions patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+
+(define_insn "@pred_vclmul<h><mode>"
+ [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+ (if_then_else:VDI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VDI
+ [(match_operand:VDI 3 "register_operand" "vr,vr")
+ (match_operand:VDI 4 "register_operand" "vr,vr")]UNSPEC_CLMUL)
+ (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBC && TARGET_64BIT"
+ "vclmul<h>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<VDI:MODE>")])
+
+(define_insn "@pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+ (if_then_else:VDI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VDI
+ [(match_operand:VDI 3 "register_operand" "vr,vr")
+ (match_operand:<VDI:VEL> 4 "register_operand" "r,r")]UNSPEC_CLMUL)
+ (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBC && TARGET_64BIT"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<VDI:MODE>")])
\ No newline at end of file
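
As a cross-check against the tests below, a masked tail-undisturbed
call such as __riscv_vclmul_vv_u64m1_tum should expand through the
pattern above to roughly the following (register choices are
illustrative):

	vsetvli	zero,a4,e64,m1,tu,ma
	vclmul.vv	v8,v9,v10,v0.t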
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..e52709493f6 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,8 @@
(V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
(V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
(V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VDI [
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 3e08e18d355..2733ea7728f 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -53,7 +53,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll")
+ vror,vwsll,vclmul,vclmulh")
(const_string "true")]
(const_string "false")))
@@ -76,7 +76,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll")
+ vror,vwsll,vclmul,vclmulh")
(const_string "true")]
(const_string "false")))
@@ -701,7 +701,7 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
- vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll")
+ vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
(eq_attr "type" "vimerge,vfmerge,vcompress")
@@ -759,7 +759,7 @@
vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
- vror,vwsll")
+ vror,vwsll,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
@@ -790,7 +790,7 @@
vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ta(operands[6])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -820,7 +820,7 @@
vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
- vror,vwsll")
+ vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ma(operands[7])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -855,7 +855,7 @@
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(const_int 8)
- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vimuladd,vfmuladd")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
new file mode 100644
index 00000000000..ba3e5cf858e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
new file mode 100644
index 00000000000..1e25831f3f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
new file mode 100644
index 00000000000..c14b8a56490
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
new file mode 100644
index 00000000000..ed3c4388af6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+ return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index f0c5431d00c..2426825baae 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -36,6 +36,8 @@ dg-init
# Main loop.
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbb/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
--
2.17.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 3/7] RISC-V: Add intrinsic functions for crypto vector Zvkg extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
2023-12-04 2:57 ` [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 2:57 ` [PATCH 4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension Feng Wang
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/
riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/
vector-crypto/intrinsic_funcs.md) for the crypto vector Zvkg extension. All
the test cases are added for API testing.
Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvkg in riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class vghsh): Add new function_base for Zvkg.
(class vgmul): Ditto.
(BASE): Add Zvkg BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct crypto_vv_def): Add function_builder for Zvkg.
(SHAPE): Add Zvkg SHAPE declaration.
* config/riscv/riscv-vector-builtins-shapes.h: Ditto.
* config/riscv/riscv-vector-builtins.cc: Define new data struct for Zvkg.
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vghsh): Add intrinsic def.
(vgmul): Ditto.
* config/riscv/riscv.md: Add Zvkg ins name.
* config/riscv/vector-crypto.md (@pred_vghsh<VSI:mode>): Add Zvkg md patterns.
(@pred_vgmul<VSI:mode>): Ditto.
* config/riscv/vector-iterators.md: Add new iterators for Zvkg.
* config/riscv/vector.md: Add the corresponding attribute for Zvkg.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add the Zvkg tests to the run list.
* gcc.target/riscv/zvk/zvkg/vghsh.c: New test.
* gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkg/vgmul.c: New test.
* gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c: New test.
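For reference, a minimal usage sketch of the new Zvkg intrinsics (not part
of the patch; the helper names are illustrative, and the operand roles
follow the Zvkg spec: vd holds the hash accumulator, vs2 the hash subkey,
vs1 the data block):

    #include <riscv_vector.h>

    /* One GHASH block step, vd = (vd ^ vs1) * vs2 in GF(2^128); each
       128-bit value occupies four 32-bit elements, so vl = 4 assuming
       VLEN >= 128.  */
    vuint32m1_t
    ghash_step (vuint32m1_t acc, vuint32m1_t subkey, vuint32m1_t block)
    {
      return __riscv_vghsh_vv_u32m1 (acc, subkey, block, 4);
    }

    /* Multiply-only step, vd = vd * vs2 in GF(2^128).  */
    vuint32m1_t
    ghash_mul (vuint32m1_t acc, vuint32m1_t subkey)
    {
      return __riscv_vgmul_vv_u32m1 (acc, subkey, 4);
    }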
---
gcc/common/config/riscv/riscv-common.cc | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 29 +++++++++++
.../riscv/riscv-vector-builtins-bases.h | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 23 +++++++++
.../riscv/riscv-vector-builtins-shapes.h | 1 +
gcc/config/riscv/riscv-vector-builtins.cc | 15 ++++++
.../riscv-vector-crypto-builtins-avail.h | 1 +
...riscv-vector-crypto-builtins-functions.def | 3 ++
gcc/config/riscv/riscv.md | 4 +-
gcc/config/riscv/vector-crypto.md | 43 +++++++++++++++-
gcc/config/riscv/vector-iterators.md | 4 ++
gcc/config/riscv/vector.md | 19 +++----
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 2 +
.../gcc.target/riscv/zvk/zvkg/vghsh.c | 51 +++++++++++++++++++
.../riscv/zvk/zvkg/vghsh_overloaded.c | 51 +++++++++++++++++++
.../gcc.target/riscv/zvk/zvkg/vgmul.c | 51 +++++++++++++++++++
.../riscv/zvk/zvkg/vgmul_overloaded.c | 51 +++++++++++++++++++
17 files changed, 340 insertions(+), 11 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 296500e15df..3eefd0263f9 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -123,6 +123,7 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvbb", "zvkb"},
{"zvbc", "v"},
{"zvkb", "v"},
+ {"zvkg", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 45b1e563ff4..0cb9b2925af 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2227,6 +2227,31 @@ public:
}
};
+class vghsh : public function_base
+{
+public:
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_vghsh (e.vector_mode ()));
+ }
+};
+
+class vgmul : public function_base
+{
+public:
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_vgmul (e.vector_mode ()));
+ }
+};
+
static CONSTEXPR const vsetvl<false> vsetvl_obj;
static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -2496,6 +2521,8 @@ static CONSTEXPR const vcltz<UNSPEC_VCTZ> vctz_obj;
static CONSTEXPR const vwsll vwsll_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMUL> vclmul_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMULH> vclmulh_obj;
+static CONSTEXPR const vghsh vghsh_obj;
+static CONSTEXPR const vgmul vgmul_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2770,4 +2797,6 @@ BASE (vror)
BASE (vwsll)
BASE (vclmul)
BASE (vclmulh)
+BASE (vghsh)
+BASE (vgmul)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 7d2c86f9162..6a389113e1f 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -292,6 +292,8 @@ extern const function_base *const vror;
extern const function_base *const vwsll;
extern const function_base *const vclmul;
extern const function_base *const vclmulh;
+extern const function_base *const vghsh;
+extern const function_base *const vgmul;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index f21c459e6a2..dd62d8b11b6 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1009,6 +1009,28 @@ struct zvbb_zvbc_def : public build_base
}
};
+/* vghsh/vgmul class. */
+struct crypto_vv_def : public build_base
+{
+ char *get_name (function_builder &b, const function_instance &instance,
+ bool overloaded_p) const override
+ {
+ /* Return nullptr if it cannot be overloaded. */
+ if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+ return nullptr;
+ b.append_base_name (instance.base_name);
+
+ if (!overloaded_p)
+ {
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ b.append_name (type_suffixes[instance.type.index].vector);
+ }
+
+ b.append_name (predication_suffixes[instance.pred]);
+ return b.finish_name ();
+ }
+};
+
SHAPE(vsetvl, vsetvl)
SHAPE(vsetvl, vsetvlmax)
SHAPE(loadstore, loadstore)
@@ -1038,4 +1060,5 @@ SHAPE(seg_loadstore, seg_loadstore)
SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
SHAPE(seg_fault_load, seg_fault_load)
SHAPE(zvbb_zvbc, zvbb_zvbc)
+SHAPE(crypto_vv, crypto_vv)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index a217eae33f0..37b7077a3b1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -54,6 +54,7 @@ extern const function_shape *const seg_indexed_loadstore;
extern const function_shape *const seg_fault_load;
/* Below function_shape are Vector Crypto. */
extern const function_shape *const zvbb_zvbc;
+extern const function_shape *const crypto_vv;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index ffd30c1a806..eaefb0f18cc 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -2639,6 +2639,21 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
ext_vcreate_args /* Args */};
+/* A static operand information for vector_type func (vector_type).
+   Some instructions only support SEW=32, such as the crypto vector
+   Zvkg extension, and use this for function registration. */
+static CONSTEXPR const rvv_op_info u_vvv_crypto_sew32_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew32_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vvv_args /* Args */};
+
/* A static operand information for vector_type func (vector_type).
Some instructions only support SEW=64, such as the crypto vector Zvbc
extension vclmul.vv, vclmul.vx.
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index a63dea6a27b..fb1f195bf9b 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -15,5 +15,6 @@ namespace riscv_vector {
AVAIL (zvbb, TARGET_ZVBB)
AVAIL (zvbc, TARGET_ZVBC)
AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
+AVAIL (zvkg, TARGET_ZVKG)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index d8c74dec4f6..c2ed9353e24 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -22,3 +22,6 @@ DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_o
DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_ops, zvbc)
DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
+//ZVKG
+DEF_VECTOR_CRYPTO_FUNCTION(vghsh, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvkg)
+DEF_VECTOR_CRYPTO_FUNCTION(vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkg)
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 4a853d8238f..1ead762e552 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -439,6 +439,8 @@
;; vwsll crypto vector widening shift left logical instructions
;; vclmul vector crypto carry-less multiply - return low half instructions
;; vclmulh vector crypto carry-less multiply - return high half instructions
+;; vghsh vector crypto add-multiply over GHASH Galois-Field instructions
+;; vgmul vector crypto multiply over GHASH Galois-Field instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -459,7 +461,7 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
- vclmul,vclmulh"
+ vclmul,vclmulh,vghsh,vgmul"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index f3034ba122a..edc7dc9d432 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -11,6 +11,8 @@
UNSPEC_VWSLL
UNSPEC_VCLMUL
UNSPEC_VCLMULH
+ UNSPEC_VGHSH
+ UNSPEC_VGMUL
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -254,4 +256,43 @@
"TARGET_ZVBC && TARGET_64BIT"
"vclmul<h>.vx\t%0,%3,%4%p1"
[(set_attr "type" "vclmul<h>")
- (set_attr "mode" "<VDI:MODE>")])
\ No newline at end of file
+ (set_attr "mode" "<VDI:MODE>")])
+
+;; Zvkg instruction patterns.
+;; vghsh.vv vgmul.vv
+(define_insn "@pred_vghsh<VSI:mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VSI:VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")
+ (match_operand:VSI 3 "register_operand" "vr")] UNSPEC_VGHSH)
+ (match_dup 1)))]
+ "TARGET_ZVKG"
+ "vghsh.vv\t%0,%2,%3"
+ [(set_attr "type" "vghsh")
+ (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_vgmul<VSI:mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VSI:VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_VGMUL)
+ (match_dup 1)))]
+ "TARGET_ZVKG"
+ "vgmul.vv\t%0,%2"
+ [(set_attr "type" "vgmul")
+ (set_attr "mode" "<VSI:MODE>")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index e52709493f6..fea84a3f54c 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3917,6 +3917,10 @@
(V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
(V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+(define_mode_iterator VSI [
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
(define_mode_iterator VDI [
(RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
(RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 2733ea7728f..aa529d6378f 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -53,7 +53,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll,vclmul,vclmulh")
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
(const_string "true")]
(const_string "false")))
@@ -76,7 +76,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll,vclmul,vclmulh")
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
(const_string "true")]
(const_string "false")))
@@ -704,7 +704,7 @@
vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
- (eq_attr "type" "vimerge,vfmerge,vcompress")
+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -743,7 +743,8 @@
vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8")
+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+ vghsh")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -765,7 +766,7 @@
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -774,7 +775,7 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -796,7 +797,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx")
+ (eq_attr "type" "vmidx,vgmul")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -838,7 +839,7 @@
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -861,7 +862,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index 2426825baae..c1b9eede6ba 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -38,6 +38,8 @@ dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbb/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkg/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
new file mode 100644
index 00000000000..3837f99fea3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vghsh\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
new file mode 100644
index 00000000000..2d2004bc653
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vghsh\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
new file mode 100644
index 00000000000..902de106c12
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vgmul\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
new file mode 100644
index 00000000000..53397ebc69b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vgmul\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
--
2.17.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
2023-12-04 2:57 ` [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension Feng Wang
2023-12-04 2:57 ` [PATCH 3/7] RISC-V: Add intrinsic functions for crypto vector Zvkg extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 2:57 ` [PATCH 5/7] RISC-V: Add intrinsic functions for crypto vector Zvknh[ab] extension Feng Wang
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/
riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/
vector-crypto/intrinsic_funcs.md) for the crypto vector Zvkned extension. All
the test cases are added for API testing.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvkned in riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class crypto_vv): Add new function_base for Zvkned.
(class vaeskf1): Ditto.
(class vgmul): Ditto.
(class vaeskf2): Ditto.
(BASE): Add Zvkned BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_zvbc_def): Add new function_builder for Zvkned.
(struct crypto_vi_def): Ditto.
(SHAPE): Add Zvkned SHAPE declaration.
* config/riscv/riscv-vector-builtins-shapes.h: Ditto.
* config/riscv/riscv-vector-builtins.cc (registered_function::overloaded_hash): Handle size_t arguments when hashing overloaded functions.
* config/riscv/riscv-vector-builtins.def (vi): Add new operator type.
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vgmul): Optimize vgmul.
(vaesef): Add intrinsic def.
(vaesem): Ditto.
(vaesdf): Ditto.
(vaesdm): Ditto.
(vaesz): Ditto.
(vaeskf1): Ditto.
(vaeskf2): Ditto.
* config/riscv/riscv.md: Add Zvkned ins name.
* config/riscv/vector-crypto.md (aesef): Add Zvkned md patterns.
(vv): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type><mode>): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar): Ditto.
(@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar): Ditto.
(@pred_vaeskf1<mode>_scalar): Ditto.
(@pred_vaeskf2<mode>_scalar): Ditto.
* config/riscv/vector-iterators.md: Add new iterators for Zvkned.
* config/riscv/vector.md: Add the corresponding attribute for Zvkned.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add the Zvkned tests to the run list.
* gcc.target/riscv/zvk/zvkned/vaesdf.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesdm.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesef.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesem.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaeskf1.c: New test.
* gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaeskf2.c: New test.
* gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesz.c: New test.
* gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c: New test.
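For reference, a short usage sketch of the Zvkned intrinsics this patch
registers (illustrative only; the .vs suffix pairs the round-key type with
the state type, matching the LMUL-ratio dispatch in crypto_vv::expand, and
the vaeskf1 round number is the size_t operand handled by overloaded_hash):

    #include <riscv_vector.h>
    #include <stddef.h>

    /* Vector-vector form: each 128-bit element group of the state is
       encrypted with its own round key from vs2.  */
    vuint32m1_t
    aes_enc_final_vv (vuint32m1_t state, vuint32m1_t rkey, size_t vl)
    {
      return __riscv_vaesef_vv_u32m1 (state, rkey, vl);
    }

    /* Vector-scalar form: the single round key (u32m1) is applied to all
       element groups of the wider u32m2 state, selecting the x2 pattern.  */
    vuint32m2_t
    aes_enc_final_vs (vuint32m2_t state, vuint32m1_t rkey, size_t vl)
    {
      return __riscv_vaesef_vs_u32m1_u32m2 (state, rkey, vl);
    }

    /* AES-128 forward key schedule, round 1; the round number is the
       immediate registered via the new crypto_vi shape.  */
    vuint32m1_t
    aes128_round1_key (vuint32m1_t key, size_t vl)
    {
      return __riscv_vaeskf1_vi_u32m1 (key, 1, vl);
    }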
---
gcc/common/config/riscv/riscv-common.cc | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 80 +++++++-
.../riscv/riscv-vector-builtins-bases.h | 7 +
.../riscv/riscv-vector-builtins-shapes.cc | 41 +++-
.../riscv/riscv-vector-builtins-shapes.h | 1 +
gcc/config/riscv/riscv-vector-builtins.cc | 62 +++++-
gcc/config/riscv/riscv-vector-builtins.def | 1 +
.../riscv-vector-crypto-builtins-avail.h | 1 +
...riscv-vector-crypto-builtins-functions.def | 34 +++-
gcc/config/riscv/riscv.md | 8 +-
gcc/config/riscv/vector-crypto.md | 184 ++++++++++++++++++
gcc/config/riscv/vector-iterators.md | 32 +++
gcc/config/riscv/vector.md | 23 ++-
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 2 +
.../gcc.target/riscv/zvk/zvkned/vaesdf.c | 169 ++++++++++++++++
.../riscv/zvk/zvkned/vaesdf_overloaded.c | 169 ++++++++++++++++
.../gcc.target/riscv/zvk/zvkned/vaesdm.c | 170 ++++++++++++++++
.../riscv/zvk/zvkned/vaesdm_overloaded.c | 170 ++++++++++++++++
.../gcc.target/riscv/zvk/zvkned/vaesef.c | 170 ++++++++++++++++
.../riscv/zvk/zvkned/vaesef_overloaded.c | 170 ++++++++++++++++
.../gcc.target/riscv/zvk/zvkned/vaesem.c | 170 ++++++++++++++++
.../riscv/zvk/zvkned/vaesem_overloaded.c | 170 ++++++++++++++++
.../gcc.target/riscv/zvk/zvkned/vaeskf1.c | 50 +++++
.../riscv/zvk/zvkned/vaeskf1_overloaded.c | 50 +++++
.../gcc.target/riscv/zvk/zvkned/vaeskf2.c | 50 +++++
.../riscv/zvk/zvkned/vaeskf2_overloaded.c | 50 +++++
.../gcc.target/riscv/zvk/zvkned/vaesz.c | 130 +++++++++++++
.../riscv/zvk/zvkned/vaesz_overloaded.c | 130 +++++++++++++
28 files changed, 2278 insertions(+), 17 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 3eefd0263f9..60a174d4801 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -124,6 +124,7 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvbc", "v"},
{"zvkb", "v"},
{"zvkg", "v"},
+ {"zvkned", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 0cb9b2925af..61167c8d4e4 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2209,6 +2209,7 @@ public:
}
};
+/* Implements clmul.  */
template<int UNSPEC>
class clmul : public function_base
{
@@ -2239,16 +2240,75 @@ public:
}
};
+/* Implements vgmul/vaes*. */
+template<int UNSPEC>
+class crypto_vv : public function_base
+{
+public:
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ poly_uint64 nunits = 0U;
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ if (UNSPEC == UNSPEC_VGMUL)
+ return e.use_exact_insn (code_for_pred_crypto_vv (UNSPEC, UNSPEC, e.vector_mode ()));
+ else
+ return e.use_exact_insn (code_for_pred_crypto_vv
+ (UNSPEC + 1, UNSPEC + 1, e.vector_mode ()));
+ case OP_TYPE_vs:
+ /* Calculate the ratio between arg0 and arg1.  */
+ multiple_p (GET_MODE_BITSIZE (e.arg_mode (0)),
+ GET_MODE_BITSIZE (e.arg_mode (1)), &nunits);
+ if (maybe_eq (nunits, 1U))
+ return e.use_exact_insn (code_for_pred_crypto_vvx1_scalar
+ (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+ else if (maybe_eq (nunits, 2U))
+ return e.use_exact_insn (code_for_pred_crypto_vvx2_scalar
+ (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+ else if (maybe_eq (nunits, 4U))
+ return e.use_exact_insn (code_for_pred_crypto_vvx4_scalar
+ (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+ else if (maybe_eq (nunits, 8U))
+ return e.use_exact_insn (code_for_pred_crypto_vvx8_scalar
+ (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+ else
+ return e.use_exact_insn (code_for_pred_crypto_vvx16_scalar
+ (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vaeskf1. */
+class vaeskf1 : public function_base
+{
+public:
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
-class vgmul : public function_base
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_vaeskf1_scalar (e.vector_mode ()));
+ }
+};
+
+/* Implements vaeskf2. */
+class vaeskf2 : public function_base
{
public:
bool apply_mask_policy_p () const override { return false; }
bool use_mask_predication_p () const override { return false; }
bool has_merge_operand_p () const override { return false; }
+
rtx expand (function_expander &e) const override
{
- return e.use_exact_insn (code_for_pred_vgmul (e.vector_mode ()));
+ return e.use_exact_insn (code_for_pred_vaeskf2_scalar (e.vector_mode ()));
}
};
@@ -2522,7 +2582,14 @@ static CONSTEXPR const vwsll vwsll_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMUL> vclmul_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMULH> vclmulh_obj;
static CONSTEXPR const vghsh vghsh_obj;
-static CONSTEXPR const vgmul vgmul_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VGMUL> vgmul_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEF> vaesef_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEM> vaesem_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDF> vaesdf_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDM> vaesdm_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESZ> vaesz_obj;
+static CONSTEXPR const vaeskf1 vaeskf1_obj;
+static CONSTEXPR const vaeskf2 vaeskf2_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2799,4 +2866,11 @@ BASE (vclmul)
BASE (vclmulh)
BASE (vghsh)
BASE (vgmul)
+BASE (vaesef)
+BASE (vaesem)
+BASE (vaesdf)
+BASE (vaesdm)
+BASE (vaesz)
+BASE (vaeskf1)
+BASE (vaeskf2)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 6a389113e1f..a420d9acd2c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -294,6 +294,13 @@ extern const function_base *const vclmul;
extern const function_base *const vclmulh;
extern const function_base *const vghsh;
extern const function_base *const vgmul;
+extern const function_base *const vaesef;
+extern const function_base *const vaesem;
+extern const function_base *const vaesdf;
+extern const function_base *const vaesdm;
+extern const function_base *const vaesz;
+extern const function_base *const vaeskf1;
+extern const function_base *const vaeskf2;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index dd62d8b11b6..22a2689eae5 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1009,7 +1009,7 @@ struct zvbb_zvbc_def : public build_base
}
};
-/* vghsh/vgmul class. */
+/* vghsh/vgmul/vaes* class. */
struct crypto_vv_def : public build_base
{
char *get_name (function_builder &b, const function_instance &instance,
@@ -1019,13 +1019,49 @@ struct crypto_vv_def : public build_base
if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
return nullptr;
b.append_base_name (instance.base_name);
+ /* There is no op_type name in the vghsh/vgmul/vaesz overloaded intrinsics.  */
+ if (!((strcmp (instance.base_name, "vghsh") == 0
+ || strcmp (instance.base_name, "vgmul") == 0
+ || strcmp (instance.base_name, "vaesz") == 0)
+ && overloaded_p))
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ if (!overloaded_p)
+ {
+ if (instance.op_info->op == OP_TYPE_vv)
+ b.append_name (type_suffixes[instance.type.index].vector);
+ else
+ {
+ vector_type_index arg0_type_idx
+ = instance.op_info->args[1].get_function_type_index
+ (instance.type.index);
+ b.append_name (type_suffixes[arg0_type_idx].vector);
+ vector_type_index ret_type_idx
+ = instance.op_info->ret.get_function_type_index
+ (instance.type.index);
+ b.append_name (type_suffixes[ret_type_idx].vector);
+ }
+ }
+
+ b.append_name (predication_suffixes[instance.pred]);
+ return b.finish_name ();
+ }
+};
+/* vaeskf1/vaeskf2 class. */
+struct crypto_vi_def : public build_base
+{
+ char *get_name (function_builder &b, const function_instance &instance,
+ bool overloaded_p) const override
+ {
+ /* Return nullptr if it cannot be overloaded. */
+ if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+ return nullptr;
+ b.append_base_name (instance.base_name);
if (!overloaded_p)
{
b.append_name (operand_suffixes[instance.op_info->op]);
b.append_name (type_suffixes[instance.type.index].vector);
}
-
b.append_name (predication_suffixes[instance.pred]);
return b.finish_name ();
}
@@ -1061,4 +1097,5 @@ SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
SHAPE(seg_fault_load, seg_fault_load)
SHAPE(zvbb_zvbc, zvbb_zvbc)
SHAPE(crypto_vv, crypto_vv)
+SHAPE(crypto_vi, crypto_vi)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index 37b7077a3b1..3bb89b575ac 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -55,6 +55,7 @@ extern const function_shape *const seg_fault_load;
/* Below function_shape are Vector Crypto. */
extern const function_shape *const zvbb_zvbc;
extern const function_shape *const crypto_vv;
+extern const function_shape *const crypto_vi;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index eaefb0f18cc..45162b289ec 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -2642,6 +2642,22 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
/* A static operand information for vector_type func (vector_type).
Some instructions only support SEW=32, such as the crypto vector
Zvkg extension, and use this for function registration. */
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x2_args[]
+ = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2),
+ rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x4_args[]
+ = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4),
+ rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x8_args[]
+ = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
+ rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x16_args[]
+ = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x16),
+ rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
static CONSTEXPR const rvv_op_info u_vvv_crypto_sew32_ops
= {crypto_sew32_ops, /* Types */
OP_TYPE_vv, /* Suffix */
@@ -2654,6 +2670,48 @@ static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew32_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
vvv_args /* Args */};
+static CONSTEXPR const rvv_op_info u_vvv_size_crypto_sew32_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vi, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vv_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vv_size_crypto_sew32_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vi, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ v_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vs, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x2_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vs, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+ vs_lmul_x2_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x4_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vs, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+ vs_lmul_x4_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x8_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vs, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), /* Return type */
+ vs_lmul_x8_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x16_ops
+ = {crypto_sew32_ops, /* Types */
+ OP_TYPE_vs, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vlmul_ext_x16), /* Return type */
+ vs_lmul_x16_args /* Args */};
+
/* A static operand information for vector_type func (vector_type).
Some instructions only support SEW=64, such as the crypto vector Zvbc
extension vclmul.vv, vclmul.vx.
@@ -4250,7 +4308,9 @@ registered_function::overloaded_hash (const vec<tree, va_gc> &arglist)
__riscv_vset(vint8m2_t dest, size_t index, vint8m1_t value); The reason
is the same as above. */
if ((instance.base == bases::vget && (i == (len - 1)))
- || (instance.base == bases::vset && (i == (len - 2))))
+ || ((instance.base == bases::vset
+ || instance.shape == shapes::crypto_vi)
+ && (i == (len - 2))))
argument_types.safe_push (size_type_node);
/* Vector fixed-point arithmetic instructions requiring argument vxrm.
For example: vuint32m4_t __riscv_vaaddu(vuint32m4_t vs2,
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 6661629aad8..0c3ee3b2986 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -558,6 +558,7 @@ DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF, _f64m8,
DEF_RVV_OP_TYPE (vv)
DEF_RVV_OP_TYPE (vx)
+DEF_RVV_OP_TYPE (vi)
DEF_RVV_OP_TYPE (v)
DEF_RVV_OP_TYPE (wv)
DEF_RVV_OP_TYPE (wx)
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index fb1f195bf9b..8b993fb31f5 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -16,5 +16,6 @@ AVAIL (zvbb, TARGET_ZVBB)
AVAIL (zvbc, TARGET_ZVBC)
AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
AVAIL (zvkg, TARGET_ZVKG)
+AVAIL (zvkned, TARGET_ZVKNED)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index c2ed9353e24..9fea9f1a757 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -24,4 +24,36 @@ DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_o
DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
//ZVKG
DEF_VECTOR_CRYPTO_FUNCTION(vghsh, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvkg)
-DEF_VECTOR_CRYPTO_FUNCTION(vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkg)
\ No newline at end of file
+DEF_VECTOR_CRYPTO_FUNCTION(vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkg)
+//ZVKNED
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf1, crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf2, crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvkned)
\ No newline at end of file
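
Each instruction above gets one .vv entry plus a ladder of .vs entries: the
lmul_x* rows let a single round-key group in vs2 serve a destination group
of up to eight times its size.  A minimal sketch of both ends of that
ladder, using intrinsics from the tests below (enc_vv and enc_vs_x8 are
illustrative names; assumes -march=rv64gc_zvkned):

  #include "riscv_vector.h"

  /* Same-width form: vd and vs2 are both u32m1.  */
  vuint32m1_t enc_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaesef_vv_u32m1 (vd, vs2, vl);
  }

  /* lmul_x8 form: vs2 stays u32m1 while vd is a u32m8 group.  */
  vuint32m8_t enc_vs_x8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl)
  {
    return __riscv_vaesef_vs_u32m1_u32m8 (vd, vs2, vl);
  }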
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 1ead762e552..39b4e4b2f6a 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -441,6 +441,12 @@
;; vclmulh vector crypto carry-less multiply - return high half instructions
;; vghsh vector crypto add-multiply over GHASH Galois-Field instructions
;; vgmul vector crypto multiply over GHASH Galois-Field instructions
+;; vaesef vector crypto AES final-round encryption instructions
+;; vaesem vector crypto AES middle-round encryption instructions
+;; vaesdf vector crypto AES final-round decryption instructions
+;; vaesdm vector crypto AES middle-round decryption instructions
+;; vaeskf1 vector crypto AES-128 Forward KeySchedule generation instructions
+;; vaeskf2 vector crypto AES-256 Forward KeySchedule generation instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -461,7 +467,7 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
- vclmul,vclmulh,vghsh,vgmul"
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
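
The middle/final split mirrors AES itself: all but the last round use
vaesem (the final round skips MixColumns and uses vaesef), and the vaesd*
types are the inverse-cipher counterparts.  A minimal sketch of the tail of
a block encryption, with illustrative names and the round keys taken as
arguments (assumes -march=rv64gc_zvkned):

  #include "riscv_vector.h"

  vuint32m1_t encrypt_tail (vuint32m1_t state, vuint32m1_t rk9,
                            vuint32m1_t rk10, size_t vl)
  {
    state = __riscv_vaesem_vv_u32m1 (state, rk9, vl); /* middle round */
    return __riscv_vaesef_vv_u32m1 (state, rk10, vl); /* final round */
  }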
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index edc7dc9d432..e84aaf50fc0 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -13,6 +13,23 @@
UNSPEC_VCLMULH
UNSPEC_VGHSH
UNSPEC_VGMUL
+ UNSPEC_VAESEF
+ UNSPEC_VAESEFVV
+ UNSPEC_VAESEFVS
+ UNSPEC_VAESEM
+ UNSPEC_VAESEMVV
+ UNSPEC_VAESEMVS
+ UNSPEC_VAESDF
+ UNSPEC_VAESDFVV
+ UNSPEC_VAESDFVS
+ UNSPEC_VAESDM
+ UNSPEC_VAESDMVV
+ UNSPEC_VAESDMVS
+ UNSPEC_VAESZ
+ UNSPEC_VAESZVVNULL
+ UNSPEC_VAESZVS
+ UNSPEC_VAESKF1
+ UNSPEC_VAESKF2
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -23,6 +40,18 @@
(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL "gmul" ) (UNSPEC_VAESEFVV "aesef")
+ (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+ (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+ (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
+ (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+ (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+ (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")])
+
(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
(define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
@@ -31,6 +60,11 @@
(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+ UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+ UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+ UNSPEC_VAESZVS])
+
;; zvbb instruction patterns.
;; vandn.vv vandn.vx vrol.vv vrol.vx
;; vror.vv vror.vx vror.vi
@@ -296,3 +330,153 @@
"vgmul.vv\t%0,%2"
[(set_attr "type" "vgmul")
(set_attr "mode" "<VSI:MODE>")])
+
+;; zvkg and zvkned instruction patterns.
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+ [(set (match_operand:<VSIX2> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX2>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX2>
+ [(match_operand:<VSIX2> 1 "register_operand" " 0")
+ (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX2_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX4>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX4>
+ [(match_operand:<VSIX4> 1 "register_operand" " 0")
+ (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX4_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX8>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX8>
+ [(match_operand:<VSIX8> 1 "register_operand" " 0")
+ (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX8_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX16>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX16>
+ [(match_operand:<VSIX16> 1 "register_operand" " 0")
+ (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX16_SI:MODE>")])
+
+;; vaeskf1.vi
+(define_insn "@pred_vaeskf1<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_VAESKF1)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKNED"
+ "vaeskf1.vi\t%0,%2,%3"
+ [(set_attr "type" "vaeskf1")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi
+(define_insn "@pred_vaeskf2<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" "0")
+ (match_operand:VSI 2 "register_operand" "vr")
+ (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_VAESKF2)
+ (match_dup 1)))]
+ "TARGET_ZVKNED"
+ "vaeskf2.vi\t%0,%2,%3"
+ [(set_attr "type" "vaeskf2")
+ (set_attr "mode" "<MODE>")])
\ No newline at end of file
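
To make the two key-schedule patterns concrete: vaeskf1 derives the next
AES-128 round key from the previous one, and vaeskf2 additionally reads vd
(note its merge operand is a tied input above).  The round number must be a
compile-time constant, since operand 3 is a const_int_operand.  A minimal
sketch of the first two AES-128 expansion steps (first_two_keys is an
illustrative name; vl would typically be 4 for one 128-bit key group):

  #include "riscv_vector.h"

  vuint32m1_t first_two_keys (vuint32m1_t rk0, size_t vl)
  {
    vuint32m1_t rk1 = __riscv_vaeskf1_vi_u32m1 (rk0, 1, vl);
    return __riscv_vaeskf1_vi_u32m1 (rk1, 2, vl);
  }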
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index fea84a3f54c..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3921,6 +3921,38 @@
RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
])
+(define_mode_iterator VLMULX2_SI [
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+ (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+ (RVVMF2SI "RVVM8SI")
+])
+
(define_mode_iterator VDI [
(RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
(RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index aa529d6378f..66a2e9358cb 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -53,7 +53,8 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz")
(const_string "true")]
(const_string "false")))
@@ -76,7 +77,8 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
- vror,vwsll,vclmul,vclmulh,vghsh,vgmul")
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz")
(const_string "true")]
(const_string "false")))
@@ -704,7 +706,8 @@
vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
- (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul")
+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -744,7 +747,7 @@
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
- vghsh")
+ vghsh,vaeskf1,vaeskf2")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -766,7 +769,8 @@
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaesz")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -775,7 +779,8 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+ vaeskf1,vaeskf2")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -797,7 +802,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx,vgmul")
+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -839,7 +844,7 @@
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -862,7 +867,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index c1b9eede6ba..b47602e1c83 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -40,6 +40,8 @@ dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkg/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkned/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
new file mode 100644
index 00000000000..8fcfd493f2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
@@ -0,0 +1,169 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
new file mode 100644
index 00000000000..b8570818358
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
@@ -0,0 +1,169 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
new file mode 100644
index 00000000000..1d4a1711cc9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
new file mode 100644
index 00000000000..4247ba3901b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
new file mode 100644
index 00000000000..93a79ffa51c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
new file mode 100644
index 00000000000..9e3998ef055
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
new file mode 100644
index 00000000000..43e468c6f0e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
new file mode 100644
index 00000000000..bb2e7dea733
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
new file mode 100644
index 00000000000..0edbb6d9108
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
new file mode 100644
index 00000000000..63e3537a06b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
new file mode 100644
index 00000000000..06fed681d6a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m1(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m4(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
new file mode 100644
index 00000000000..da7f42aef88
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
new file mode 100644
index 00000000000..fbbbeaa78ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
new file mode 100644
index 00000000000..9130fbdc4ef
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesz(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 5/7] RISC-V: Add intrinsic functions for crypto vector Zvknh[ab] extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
` (2 preceding siblings ...)
2023-12-04 2:57 ` [PATCH 4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 2:57 ` [PATCH 6/7] RISC-V: Add intrinsic functions for crypto vector Zvksed extension Feng Wang
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md) for the crypto vector Zvknh[ab] extensions. All the corresponding test cases for API testing are added as well.
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
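For reference, here is a minimal usage sketch of the new intrinsics (not part of the patch). The function name and operand flow are illustrative only; the prototypes match the ones exercised by the tests below, and a real SHA-256 block loop would sequence many such steps:

#include <riscv_vector.h>

/* One illustrative SHA-256 step at LMUL=1, compiled with
   -march=rv64gc_zvknha (or _zvknhb) as in the dg-options of the new
   tests: vsha2ms expands the message schedule, and vsha2ch/vsha2cl
   each perform two rounds of compression.  */
static vuint32m1_t
sha256_step_sketch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1,
                    size_t vl)
{
  vuint32m1_t w = __riscv_vsha2ms_vv_u32m1 (vd, vs2, vs1, vl);
  w = __riscv_vsha2ch_vv_u32m1 (w, vs2, vs1, vl);
  return __riscv_vsha2cl_vv_u32m1 (w, vs2, vs1, vl);
}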
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvknh[ab] to riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class vghsh): Remove in favor of the templated class below.
(class vg_nhab): New function_base covering vghsh/vsha2ms/vsha2c[hl] for Zvknh[ab].
(BASE): Add Zvknh[ab] BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct crypto_vv_def): Add function_builder for Zvknh[ab].
* config/riscv/riscv-vector-builtins.cc: Define new data struct for Zvknh[ab].
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vaeskf2): Add intrinsic def.
(vsha2ms): Ditto.
(vsha2ch): Ditto.
(vsha2cl): Ditto.
* config/riscv/riscv.md: Add Zvknh[ab] ins name.
* config/riscv/vector-crypto.md (sha2ms): Add Zvknh[ab] md patterns.
(@pred_vghsh<VSI:mode>): Ditto.
(@pred_v<vv_ins1_name><mode>): Ditto.
(@pred_vgmul<VSI:mode>): Ditto.
* config/riscv/vector.md: Add the corresponding attribute for Zvknh[ab].
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add the zvknha and zvknhb test directories.
* gcc.target/riscv/zvk/zvknha/vsha2ch.c: New test.
* gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c: New test.
* gcc.target/riscv/zvk/zvknha/vsha2cl.c: New test.
* gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c: New test.
* gcc.target/riscv/zvk/zvknha/vsha2ms.c: New test.
* gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2ch.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2cl.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2ms.c: New test.
* gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c: New test.
---
gcc/common/config/riscv/riscv-common.cc | 2 +
.../riscv/riscv-vector-builtins-bases.cc | 15 +++-
.../riscv/riscv-vector-builtins-bases.h | 3 +
.../riscv/riscv-vector-builtins-shapes.cc | 7 +-
gcc/config/riscv/riscv-vector-builtins.cc | 6 ++
.../riscv-vector-crypto-builtins-avail.h | 2 +
...riscv-vector-crypto-builtins-functions.def | 10 ++-
gcc/config/riscv/riscv.md | 27 +++---
gcc/config/riscv/vector-crypto.md | 55 +++++-------
gcc/config/riscv/vector.md | 12 +--
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 4 +
.../gcc.target/riscv/zvk/zvknha/vsha2ch.c | 51 ++++++++++++
.../riscv/zvk/zvknha/vsha2ch_overloaded.c | 51 ++++++++++++
.../gcc.target/riscv/zvk/zvknha/vsha2cl.c | 51 ++++++++++++
.../riscv/zvk/zvknha/vsha2cl_overloaded.c | 51 ++++++++++++
.../gcc.target/riscv/zvk/zvknha/vsha2ms.c | 51 ++++++++++++
.../riscv/zvk/zvknha/vsha2ms_overloaded.c | 51 ++++++++++++
.../gcc.target/riscv/zvk/zvknhb/vsha2ch.c | 83 +++++++++++++++++++
.../riscv/zvk/zvknhb/vsha2ch_overloaded.c | 83 +++++++++++++++++++
.../gcc.target/riscv/zvk/zvknhb/vsha2cl.c | 83 +++++++++++++++++++
.../riscv/zvk/zvknhb/vsha2cl_overloaded.c | 83 +++++++++++++++++++
.../gcc.target/riscv/zvk/zvknhb/vsha2ms.c | 83 +++++++++++++++++++
.../riscv/zvk/zvknhb/vsha2ms_overloaded.c | 83 +++++++++++++++++++
23 files changed, 892 insertions(+), 55 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 60a174d4801..7201ac3866c 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -125,6 +125,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvkb", "v"},
{"zvkg", "v"},
{"zvkned", "v"},
+ {"zvknha", "v"},
+ {"zvknhb", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 61167c8d4e4..a3670ec5b38 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2228,15 +2228,18 @@ public:
}
};
-class vghsh : public function_base
+/* Implements vghsh/vsha2ms/vsha2c[hl]. */
+template<int UNSPEC>
+class vg_nhab : public function_base
{
public:
bool apply_mask_policy_p () const override { return false; }
bool use_mask_predication_p () const override { return false; }
bool has_merge_operand_p () const override { return false; }
+
rtx expand (function_expander &e) const override
{
- return e.use_exact_insn (code_for_pred_vghsh (e.vector_mode ()));
+ return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
}
};
@@ -2581,7 +2584,7 @@ static CONSTEXPR const vcltz<UNSPEC_VCTZ> vctz_obj;
static CONSTEXPR const vwsll vwsll_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMUL> vclmul_obj;
static CONSTEXPR const clmul<UNSPEC_VCLMULH> vclmulh_obj;
-static CONSTEXPR const vghsh vghsh_obj;
+static CONSTEXPR const vg_nhab<UNSPEC_VGHSH> vghsh_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VGMUL> vgmul_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESEF> vaesef_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESEM> vaesem_obj;
@@ -2590,6 +2593,9 @@ static CONSTEXPR const crypto_vv<UNSPEC_VAESDM> vaesdm_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESZ> vaesz_obj;
static CONSTEXPR const vaeskf1 vaeskf1_obj;
static CONSTEXPR const vaeskf2 vaeskf2_obj;
+static CONSTEXPR const vg_nhab<UNSPEC_VSHA2MS> vsha2ms_obj;
+static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CH> vsha2ch_obj;
+static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CL> vsha2cl_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2873,4 +2879,7 @@ BASE (vaesdm)
BASE (vaesz)
BASE (vaeskf1)
BASE (vaeskf2)
+BASE (vsha2ms)
+BASE (vsha2ch)
+BASE (vsha2cl)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index a420d9acd2c..0560b0008f0 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -301,6 +301,9 @@ extern const function_base *const vaesdm;
extern const function_base *const vaesz;
extern const function_base *const vaeskf1;
extern const function_base *const vaeskf2;
+extern const function_base *const vsha2ms;
+extern const function_base *const vsha2ch;
+extern const function_base *const vsha2cl;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 22a2689eae5..5873103857a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1019,10 +1019,13 @@ struct crypto_vv_def : public build_base
if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
return nullptr;
b.append_base_name (instance.base_name);
- /* There is no op_type name in vghsh/vgmul/vaesz overloaded intrinsic */
+ /* There is no op_type name in the vghsh/vgmul/vaesz/vsha2ms/vsha2ch/vsha2cl overloaded intrinsics. */
if (!((strcmp (instance.base_name, "vghsh") == 0
|| strcmp (instance.base_name, "vgmul") == 0
- || strcmp (instance.base_name, "vaesz") == 0)
+ || strcmp (instance.base_name, "vaesz") == 0
+ || strcmp (instance.base_name, "vsha2ms") == 0
+ || strcmp (instance.base_name, "vsha2ch") == 0
+ || strcmp (instance.base_name, "vsha2cl") == 0)
&& overloaded_p))
b.append_name (operand_suffixes[instance.op_info->op]);
if (!overloaded_p)
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 45162b289ec..e5e8f0d19ac 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -2728,6 +2728,12 @@ static CONSTEXPR const rvv_op_info u_vvx_crypto_sew64_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
vx_args /* Args */};
+static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew64_ops
+ = {crypto_sew64_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vvv_args /* Args */};
+
/* A list of all RVV base function types. */
static CONSTEXPR const function_type_info function_types[] = {
#define DEF_RVV_TYPE_INDEX( \
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index 8b993fb31f5..bc1b6ec9b5b 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -17,5 +17,7 @@ AVAIL (zvbc, TARGET_ZVBC)
AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
AVAIL (zvkg, TARGET_ZVKG)
AVAIL (zvkned, TARGET_ZVKNED)
+AVAIL (zvknha_or_zvknhb, TARGET_ZVKNHA || TARGET_ZVKNHB)
+AVAIL (zvknhb, TARGET_ZVKNHB)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index 9fea9f1a757..9c89412a9a9 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -56,4 +56,12 @@ DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew
DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvkned)
DEF_VECTOR_CRYPTO_FUNCTION (vaesz, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
DEF_VECTOR_CRYPTO_FUNCTION (vaeskf1, crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops, zvkned)
-DEF_VECTOR_CRYPTO_FUNCTION (vaeskf2, crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvkned)
\ No newline at end of file
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf2, crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvkned)
+//ZVKNHA
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ms, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ch, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+//ZVKNHB
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ms, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ch, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 39b4e4b2f6a..e8fc21e8ceb 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -437,16 +437,20 @@
;; vrol crypto vector rotate left instructions
;; vror crypto vector rotate right instructions
;; vwsll crypto vector widening shift left logical instructions
-;; vclmul vector crypto carry-less multiply - return low half instructions
-;; vclmulh vector crypto carry-less multiply - return high half instructions
-;; vghsh vector crypto add-multiply over GHASH Galois-Field instructions
-;; vgmul vector crypto multiply over GHASH Galois-Field instrumctions
-;; vaesef vector crypto AES final-round encryption instructions
-;; vaesem vector crypto AES middle-round encryption instructions
-;; vaesdf vector crypto AES final-round decryption instructions
-;; vaesdm vector crypto AES middle-round decryption instructions
-;; vaeskf1 vector crypto AES-128 Forward KeySchedule generation instructions
-;; vaeskf2 vector crypto AES-256 Forward KeySchedule generation instructions
+;; vclmul crypto vector carry-less multiply - return low half instructions
+;; vclmulh crypto vector carry-less multiply - return high half instructions
+;; vghsh crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef crypto vector AES final-round encryption instructions
+;; vaesem crypto vector AES middle-round encryption instructions
+;; vaesdf crypto vector AES final-round decryption instructions
+;; vaesdm crypto vector AES middle-round decryption instructions
+;; vaeskf1 crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2 crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms crypto vector SHA-2 message schedule instructions
+;; vsha2ch crypto vector SHA-2 two rounds of compression instructions (high words)
+;; vsha2cl crypto vector SHA-2 two rounds of compression instructions (low words)
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -467,7 +471,8 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
- vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz"
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+ vsha2ms,vsha2ch,vsha2cl"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index e84aaf50fc0..38b41fb3664 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -30,6 +30,9 @@
UNSPEC_VAESZVS
UNSPEC_VAESKF1
UNSPEC_VAESKF2
+ UNSPEC_VSHA2MS
+ UNSPEC_VSHA2CH
+ UNSPEC_VSHA2CL
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -46,6 +49,9 @@
(UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
(UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )])
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
+ (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
(UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
(UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
@@ -65,6 +71,8 @@
UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
UNSPEC_VAESZVS])
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
;; zvbb instructions patterns.
;; vandn.vv vandn.vx vrol.vv vrol.vx
;; vror.vv vror.vx vror.vi
@@ -292,44 +300,27 @@
[(set_attr "type" "vclmul<h>")
(set_attr "mode" "<VDI:MODE>")])
-;; zvkg instructions patterns.
-;; vghsh.vv vgmul.vv
-(define_insn "@pred_vghsh<VSI:mode>"
- [(set (match_operand:VSI 0 "register_operand" "=vd")
- (if_then_else:VSI
- (unspec:<VSI:VM>
+;; zvknh[ab] and zvkg instructions patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+
+(define_insn "@pred_v<vv_ins1_name><mode>"
+ [(set (match_operand:VQEXTI 0 "register_operand" "=vd")
+ (if_then_else:VQEXTI
+ (unspec:<VQEXTI:VM>
[(match_operand 4 "vector_length_operand" "rK")
(match_operand 5 "const_int_operand" " i")
(match_operand 6 "const_int_operand" " i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (unspec:VSI
- [(match_operand:VSI 1 "register_operand" " 0")
- (match_operand:VSI 2 "register_operand" "vr")
- (match_operand:VSI 3 "register_operand" "vr")] UNSPEC_VGHSH)
- (match_dup 1)))]
- "TARGET_ZVKG"
- "vghsh.vv\t%0,%2,%3"
- [(set_attr "type" "vghsh")
- (set_attr "mode" "<VSI:MODE>")])
-
-(define_insn "@pred_vgmul<VSI:mode>"
- [(set (match_operand:VSI 0 "register_operand" "=vd")
- (if_then_else:VSI
- (unspec:<VSI:VM>
- [(match_operand 3 "vector_length_operand" "rK")
- (match_operand 4 "const_int_operand" " i")
- (match_operand 5 "const_int_operand" " i")
- (reg:SI VL_REGNUM)
- (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (unspec:VSI
- [(match_operand:VSI 1 "register_operand" " 0")
- (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_VGMUL)
+ (unspec:VQEXTI
+ [(match_operand:VQEXTI 1 "register_operand" " 0")
+ (match_operand:VQEXTI 2 "register_operand" "vr")
+ (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
(match_dup 1)))]
- "TARGET_ZVKG"
- "vgmul.vv\t%0,%2"
- [(set_attr "type" "vgmul")
- (set_attr "mode" "<VSI:MODE>")])
+ "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+ "v<vv_ins1_name>.vv\t%0,%2,%3"
+ [(set_attr "type" "v<vv_ins1_name>")
+ (set_attr "mode" "<VQEXTI:MODE>")])
;; zvkned instructions patterns.
;; vgmul.vv vaesz.vs
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 66a2e9358cb..e2de9bfc496 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -54,7 +54,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
(const_string "true")]
(const_string "false")))
@@ -78,7 +78,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
(const_string "true")]
(const_string "false")))
@@ -707,7 +707,7 @@
(const_int 2)
(eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -747,7 +747,7 @@
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
- vghsh,vaeskf1,vaeskf2")
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -780,7 +780,7 @@
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
- vaeskf1,vaeskf2")
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -867,7 +867,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index b47602e1c83..13f1302314a 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -42,6 +42,10 @@ dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkg/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkned/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknha/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknhb/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
new file mode 100644
index 00000000000..2dea4bbb89f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
new file mode 100644
index 00000000000..5a16400f800
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
new file mode 100644
index 00000000000..b2bbc1559f6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
new file mode 100644
index 00000000000..7a54466b204
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
new file mode 100644
index 00000000000..57523576c3a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
new file mode 100644
index 00000000000..4d31ee0ee34
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
new file mode 100644
index 00000000000..811c313887b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
new file mode 100644
index 00000000000..a09f9876c75
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
new file mode 100644
index 00000000000..f44c5a2cfbf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
new file mode 100644
index 00000000000..2354ab54a63
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
new file mode 100644
index 00000000000..45aba16119d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
new file mode 100644
index 00000000000..3cad2e09fc7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
--
2.17.1
* [PATCH 6/7] RISC-V: Add intrinsic functions for crypto vector Zvksed extension.
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
` (3 preceding siblings ...)
2023-12-04 2:57 ` [PATCH 5/7] RISC-V: Add intrinsic functions for crypto vector Zvknh[ab] extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 2:57 ` [PATCH 7/7] RISC-V: Add intrinsic functions for crypto vector Zvksh extension Feng Wang
2023-12-04 8:01 ` [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Kito Cheng
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/
riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/
vector-crypto/intrinsic_funcs.md) for the crypto vector Zvksed extension. All
the corresponding test cases are added for API testing.
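For context, a minimal usage sketch of the new API (schematic only: the intrinsic names match the tests below, but the operand roles follow my reading of the Zvksed spec, and sm4_four_rounds is a hypothetical function):

#include <riscv_vector.h>

/* Derive the next four SM4 round keys from the previous four and
   apply four rounds to the block state.  The immediate 0 selects
   the first 4-round key group; it must be a compile-time constant.  */
vuint32m1_t
sm4_four_rounds (vuint32m1_t prev_rk, vuint32m1_t state, size_t vl)
{
  vuint32m1_t rk = __riscv_vsm4k_vi_u32m1 (prev_rk, 0, vl);
  return __riscv_vsm4r_vv_u32m1 (state, rk, vl);
}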
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvksed in riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class vaeskf1): Rename to crypto_vi and templatize on the unspec so Zvksed can share it.
(class crypto_vi): New function_base template for vaeskf1/vsm4k.
(BASE): Add Zvksed BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct crypto_vv_def): Add function_builder for Zvksed.
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vsha2cl): Add intrinsic def.
(vsm4k): Ditto.
(vsm4r): Ditto.
* config/riscv/riscv.md: Add Zvksed ins name.
* config/riscv/vector-crypto.md (sm4k): Add Zvksed md patterns.
(@pred_vaeskf1<mode>_scalar): Ditto.
(@pred_crypto_vi<vi_ins_name><mode>_scalar): Ditto.
* config/riscv/vector.md: Add the corresponding attribute for Zvksed.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add the Zvksed test directory to the runtest list.
* gcc.target/riscv/zvk/zvksed/vsm4k.c: New test.
* gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c: New test.
* gcc.target/riscv/zvk/zvksed/vsm4r.c: New test.
* gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c: New test.
---
gcc/common/config/riscv/riscv-common.cc | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 13 +-
.../riscv/riscv-vector-builtins-bases.h | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 2 +-
.../riscv-vector-crypto-builtins-avail.h | 1 +
...riscv-vector-crypto-builtins-functions.def | 10 +-
gcc/config/riscv/riscv.md | 5 +-
gcc/config/riscv/vector-crypto.md | 40 +++--
gcc/config/riscv/vector.md | 20 ++-
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 3 +-
.../gcc.target/riscv/zvk/zvksed/vsm4k.c | 50 ++++++
.../riscv/zvk/zvksed/vsm4k_overloaded.c | 50 ++++++
.../gcc.target/riscv/zvk/zvksed/vsm4r.c | 170 ++++++++++++++++++
.../riscv/zvk/zvksed/vsm4r_overloaded.c | 170 ++++++++++++++++++
14 files changed, 505 insertions(+), 32 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 7201ac3866c..87595b135ef 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -127,6 +127,7 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvkned", "v"},
{"zvknha", "v"},
{"zvknhb", "v"},
+ {"zvksed", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index a3670ec5b38..83309f07661 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2288,8 +2288,9 @@ public:
}
};
-/* Implements vaeskf1. */
-class vaeskf1 : public function_base
+/* Implements vaeskf1/vsm4k. */
+template<int UNSPEC>
+class crypto_vi : public function_base
{
public:
bool apply_mask_policy_p () const override { return false; }
@@ -2297,7 +2298,7 @@ public:
rtx expand (function_expander &e) const override
{
- return e.use_exact_insn (code_for_pred_vaeskf1_scalar (e.vector_mode ()));
+ return e.use_exact_insn (code_for_pred_crypto_vi_scalar (UNSPEC, e.vector_mode ()));
}
};
@@ -2591,11 +2592,13 @@ static CONSTEXPR const crypto_vv<UNSPEC_VAESEM> vaesem_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESDF> vaesdf_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESDM> vaesdm_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESZ> vaesz_obj;
-static CONSTEXPR const vaeskf1 vaeskf1_obj;
+static CONSTEXPR const crypto_vi<UNSPEC_VAESKF1> vaeskf1_obj;
static CONSTEXPR const vaeskf2 vaeskf2_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2MS> vsha2ms_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CH> vsha2ch_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CL> vsha2cl_obj;
+static CONSTEXPR const crypto_vi<UNSPEC_VSM4K> vsm4k_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VSM4R> vsm4r_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2882,4 +2885,6 @@ BASE (vaeskf2)
BASE (vsha2ms)
BASE (vsha2ch)
BASE (vsha2cl)
+BASE (vsm4k)
+BASE (vsm4r)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 0560b0008f0..e9e6d7bfe7f 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -304,6 +304,8 @@ extern const function_base *const vaeskf2;
extern const function_base *const vsha2ms;
extern const function_base *const vsha2ch;
extern const function_base *const vsha2cl;
+extern const function_base *const vsm4k;
+extern const function_base *const vsm4r;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 5873103857a..4fe298917f6 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1050,7 +1050,7 @@ struct crypto_vv_def : public build_base
}
};
-/* vaeskf1/vaeskf2 class. */
+/* vaeskf1/vaeskf2/vsm4k class. */
struct crypto_vi_def : public build_base
{
char *get_name (function_builder &b, const function_instance &instance,
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index bc1b6ec9b5b..f09315923f3 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -19,5 +19,6 @@ AVAIL (zvkg, TARGET_ZVKG)
AVAIL (zvkned, TARGET_ZVKNED)
AVAIL (zvknha_or_zvknhb, TARGET_ZVKNHA || TARGET_ZVKNHB)
AVAIL (zvknhb, TARGET_ZVKNHB)
+AVAIL (zvksed, TARGET_ZVKSED)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index 9c89412a9a9..67f3bf5284b 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -64,4 +64,12 @@ DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl, crypto_vv, none_tu_preds, u_vvvv_crypto_se
//ZVKNHB
DEF_VECTOR_CRYPTO_FUNCTION (vsha2ms, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
DEF_VECTOR_CRYPTO_FUNCTION (vsha2ch, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
-DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
\ No newline at end of file
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl, crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+//Zvksed
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4k, crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvksed)
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index e8fc21e8ceb..c076b82008a 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,7 @@
;; vcompress vector compress instruction
;; vmov whole vector register move
;; vector unknown vector instruction
+;; 17. Crypto Vector instructions
;; vandn crypto vector bitwise and-not instructions
;; vbrev crypto vector reverse bits in elements instructions
;; vbrev8 crypto vector reverse bits in bytes instructions
@@ -451,6 +452,8 @@
;; vsha2ms crypto vector SHA-2 message schedule instructions
;; vsha2ch crypto vector SHA-2 two rounds of compression instructions
;; vsha2cl crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k crypto vector SM4 KeyExpansion instructions
+;; vsm4r crypto vector SM4 Rounds instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -472,7 +475,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
- vsha2ms,vsha2ch,vsha2cl"
+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index 38b41fb3664..7bd4cd9f8b9 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -33,6 +33,10 @@
UNSPEC_VSHA2MS
UNSPEC_VSHA2CH
UNSPEC_VSHA2CL
+ UNSPEC_VSM4K
+ UNSPEC_VSM4R
+ UNSPEC_VSM4RVV
+ UNSPEC_VSM4RVS
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -47,16 +51,20 @@
(UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
(UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
(UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
- (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )])
+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )
+ (UNSPEC_VSM4RVV "sm4r" ) (UNSPEC_VSM4RVS "sm4r" )])
(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
(UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
(UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
(UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
(UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
- (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")])
+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")
+ (UNSPEC_VSM4RVV "vv") (UNSPEC_VSM4RVS "vs")])
(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
@@ -69,10 +77,12 @@
(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
- UNSPEC_VAESZVS])
+ UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
;; zvbb instructions patterns.
;; vandn.vv vandn.vx vrol.vv vrol.vx
;; vror.vv vror.vx vror.vi
@@ -338,7 +348,7 @@
[(match_operand:VSI 1 "register_operand" " 0")
(match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKG || TARGET_ZVKNED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VSI:MODE>")])
@@ -356,7 +366,7 @@
[(match_operand:VSI 1 "register_operand" " 0")
(match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKNED || TARGET_ZVKSED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VSI:MODE>")])
@@ -374,7 +384,7 @@
[(match_operand:<VSIX2> 1 "register_operand" " 0")
(match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKNED || TARGET_ZVKSED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VLMULX2_SI:MODE>")])
@@ -392,7 +402,7 @@
[(match_operand:<VSIX4> 1 "register_operand" " 0")
(match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKNED || TARGET_ZVKSED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VLMULX4_SI:MODE>")])
@@ -410,7 +420,7 @@
[(match_operand:<VSIX8> 1 "register_operand" " 0")
(match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKNED || TARGET_ZVKSED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VLMULX8_SI:MODE>")])
@@ -428,13 +438,13 @@
[(match_operand:<VSIX16> 1 "register_operand" " 0")
(match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
(match_dup 1)))]
- "TARGET_ZVKNED"
+ "TARGET_ZVKNED || TARGET_ZVKSED"
"v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v<vv_ins_name>")
(set_attr "mode" "<VLMULX16_SI:MODE>")])
-;; vaeskf1.vi
-(define_insn "@pred_vaeskf1<mode>_scalar"
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
[(set (match_operand:VSI 0 "register_operand" "=vd, vd")
(if_then_else:VSI
(unspec:<VM>
@@ -445,11 +455,11 @@
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
(unspec:VSI
[(match_operand:VSI 2 "register_operand" "vr, vr")
- (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_VAESKF1)
+ (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
(match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
- "TARGET_ZVKNED"
- "vaeskf1.vi\t%0,%2,%3"
- [(set_attr "type" "vaeskf1")
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vi_ins_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins_name>")
(set_attr "mode" "<MODE>")])
;; vaeskf2.vi
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index e2de9bfc496..7fae91b3860 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -54,7 +54,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
(const_string "true")]
(const_string "false")))
@@ -78,7 +78,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
(const_string "true")]
(const_string "false")))
@@ -707,7 +707,7 @@
(const_int 2)
(eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -747,7 +747,7 @@
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
- vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -770,7 +770,7 @@
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaesz")
+ vaesz,vsm4r")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -780,7 +780,7 @@
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
- vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -802,7 +802,7 @@
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -844,7 +844,8 @@
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+ vsm4r")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -867,7 +868,8 @@
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+ vsm4k")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index 13f1302314a..7d87b0c1bee 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -46,6 +46,7 @@ dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknha/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknhb/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
-
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvksed/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
new file mode 100644
index 00000000000..7a8a0857f31
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m8(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm4k\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
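
The immediate operand (0 in every call above) is the round-group index: each vsm4k call derives the next four round keys from the previous four. A hedged sketch of a fuller key schedule (unrolled because the immediate must be a compile-time constant; sm4_key_schedule_head is a hypothetical function):

#include <riscv_vector.h>

/* First three of the eight 4-round key groups; a complete schedule
   would continue through group index 7 (32 round keys in total).  */
vuint32m1_t
sm4_key_schedule_head (vuint32m1_t key, size_t vl)
{
  vuint32m1_t k0 = __riscv_vsm4k_vi_u32m1 (key, 0, vl);
  vuint32m1_t k1 = __riscv_vsm4k_vi_u32m1 (k0, 1, vl);
  vuint32m1_t k2 = __riscv_vsm4k_vi_u32m1 (k1, 2, vl);
  return k2;
}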
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
new file mode 100644
index 00000000000..dd06a7e58d8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4k(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm4k\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
new file mode 100644
index 00000000000..dac66db3abb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
new file mode 100644
index 00000000000..6311adfb2d5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
--
2.17.1
* [PATCH 7/7] RISC-V: Add intrinsic functions for crypto vector Zvksh extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
` (4 preceding siblings ...)
2023-12-04 2:57 ` [PATCH 6/7] RISC-V: Add intrinsic functions for crypto vector Zvksed extension Feng Wang
@ 2023-12-04 2:57 ` Feng Wang
2023-12-04 8:01 ` [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Kito Cheng
6 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 2:57 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang
This patch adds the intrinsic functions (according to https://github.com/
riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/
vector-crypto/intrinsic_funcs.md) for the crypto vector Zvksh extension.
All the test cases are added for API testing.
Co-Authored-By: Songhe Zhu <zhusonghe@eswincomputing.com>
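For reference, a minimal usage sketch (an illustration only, not part of the
patch) showing how the two new Zvksh intrinsics compose into one SM3
message-expansion-plus-compression step. The intrinsic signatures match the
tests below; the helper and variable names are ours, and the element-group
layout of the message words and the round-group immediate follow the Zvksh
spec rather than being spelled out here:

#include <riscv_vector.h>

static vuint32m1_t
sm3_step_sketch (vuint32m1_t state, vuint32m1_t w0_7, vuint32m1_t w8_15,
                 size_t vl)
{
  /* Expand the next eight message words from the previous sixteen.  */
  vuint32m1_t next = __riscv_vsm3me_vv_u32m1 (w8_15, w0_7, vl);
  /* Run two compression rounds; the immediate selects the round group.  */
  return __riscv_vsm3c_vi_u32m1 (state, next, 0, vl);
}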
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: Add Zvksh in riscv_implied_info.
* config/riscv/riscv-vector-builtins-bases.cc (class vaeskf2): Add new function_base for Zvksh.
(class vaeskf2_vsm3c): Ditto.
(class vsm3me): Ditto.
(BASE): Add Zvksh BASE declaration.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_zvbc_def): Add function_builder for Zvksh.
(struct crypto_vv_def): Ditto.
* config/riscv/riscv-vector-crypto-builtins-avail.h (AVAIL): Add enable condition.
* config/riscv/riscv-vector-crypto-builtins-functions.def (vsm4r): Add intrinsic def.
(vsm3me): Ditto.
(vsm3c): Ditto.
* config/riscv/riscv.md: Add Zvksh ins name.
* config/riscv/vector-crypto.md (sm3c): Add Zvksh md patterns.
(@pred_vaeskf2<mode>_scalar): Ditto.
(@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar): Ditto.
(@pred_vsm3me<mode>): Ditto.
* config/riscv/vector.md: Add the corresponding attribute for Zvksh.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zvk/zvk.exp: Add zvksh tests.
* gcc.target/riscv/zvk/zvksh/vsm3c.c: New test.
* gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c: New test.
* gcc.target/riscv/zvk/zvksh/vsm3me.c: New test.
* gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c: New test.
---
gcc/common/config/riscv/riscv-common.cc | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 26 ++++++++--
.../riscv/riscv-vector-builtins-bases.h | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 10 ++--
.../riscv-vector-crypto-builtins-avail.h | 1 +
...riscv-vector-crypto-builtins-functions.def | 5 +-
gcc/config/riscv/riscv.md | 4 +-
gcc/config/riscv/vector-crypto.md | 43 +++++++++++++---
gcc/config/riscv/vector.md | 12 ++---
gcc/testsuite/gcc.target/riscv/zvk/zvk.exp | 2 +
.../gcc.target/riscv/zvk/zvksh/vsm3c.c | 51 +++++++++++++++++++
.../riscv/zvk/zvksh/vsm3c_overloaded.c | 51 +++++++++++++++++++
.../gcc.target/riscv/zvk/zvksh/vsm3me.c | 51 +++++++++++++++++++
.../riscv/zvk/zvksh/vsm3me_overloaded.c | 51 +++++++++++++++++++
14 files changed, 286 insertions(+), 24 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 87595b135ef..dbb42ca2f1e 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -128,6 +128,7 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"zvknha", "v"},
{"zvknhb", "v"},
{"zvksed", "v"},
+ {"zvksh", "v"},
{"zfh", "zfhmin"},
{"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 83309f07661..07a9dc49104 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2302,8 +2302,9 @@ public:
}
};
-/* Implements vaeskf2. */
-class vaeskf2 : public function_base
+/* Implements vaeskf2/vsm3c. */
+template<int UNSPEC>
+class vaeskf2_vsm3c : public function_base
{
public:
bool apply_mask_policy_p () const override { return false; }
@@ -2312,7 +2313,20 @@ public:
rtx expand (function_expander &e) const override
{
- return e.use_exact_insn (code_for_pred_vaeskf2_scalar (e.vector_mode ()));
+ return e.use_exact_insn (code_for_pred_vi_nomaskedoff_scalar (UNSPEC, e.vector_mode ()));
+ }
+};
+
+/* Implements vsm3me. */
+class vsm3me : public function_base
+{
+public:
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_vsm3me (e.vector_mode ()));
}
};
@@ -2593,12 +2607,14 @@ static CONSTEXPR const crypto_vv<UNSPEC_VAESDF> vaesdf_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESDM> vaesdm_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VAESZ> vaesz_obj;
static CONSTEXPR const crypto_vi<UNSPEC_VAESKF1> vaeskf1_obj;
-static CONSTEXPR const vaeskf2 vaeskf2_obj;
+static CONSTEXPR const vaeskf2_vsm3c<UNSPEC_VAESKF2> vaeskf2_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2MS> vsha2ms_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CH> vsha2ch_obj;
static CONSTEXPR const vg_nhab<UNSPEC_VSHA2CL> vsha2cl_obj;
static CONSTEXPR const crypto_vi<UNSPEC_VSM4K> vsm4k_obj;
static CONSTEXPR const crypto_vv<UNSPEC_VSM4R> vsm4r_obj;
+static CONSTEXPR const vsm3me vsm3me_obj;
+static CONSTEXPR const vaeskf2_vsm3c<UNSPEC_VSM3C> vsm3c_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -2887,4 +2903,6 @@ BASE (vsha2ch)
BASE (vsha2cl)
BASE (vsm4k)
BASE (vsm4r)
+BASE (vsm3me)
+BASE (vsm3c)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index e9e6d7bfe7f..51b53a3d4a9 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -306,6 +306,8 @@ extern const function_base *const vsha2ch;
extern const function_base *const vsha2cl;
extern const function_base *const vsm4k;
extern const function_base *const vsm4r;
+extern const function_base *const vsm3me;
+extern const function_base *const vsm3c;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4fe298917f6..0272a0d51cf 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -1009,7 +1009,7 @@ struct zvbb_zvbc_def : public build_base
}
};
-/* vghsh/vgmul/vaes* class. */
+/* vghsh/vgmul/vsha2ms/vsha2ch/vsha2cl/vsm3me/vaes* class. */
struct crypto_vv_def : public build_base
{
char *get_name (function_builder &b, const function_instance &instance,
@@ -1019,13 +1019,15 @@ struct crypto_vv_def : public build_base
if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
return nullptr;
b.append_base_name (instance.base_name);
- /* There is no op_type name in vghsh/vgmul/vaesz/vsha2ms/vsha2ch/vsha2cl overloaded intrinsic */
+ /* There is no op_type name in vghsh/vgmul/vaesz/vsha2ms/vsha2ch/vsha2cl/
+ vsm3me overloaded intrinsic */
if (!((strcmp (instance.base_name, "vghsh") == 0
|| strcmp (instance.base_name, "vgmul") == 0
|| strcmp (instance.base_name, "vaesz") == 0
|| strcmp (instance.base_name, "vsha2ms") == 0
|| strcmp (instance.base_name, "vsha2ch") == 0
- || strcmp (instance.base_name, "vsha2cl") == 0)
+ || strcmp (instance.base_name, "vsha2cl") == 0
+ || strcmp (instance.base_name, "vsm3me") == 0)
&& overloaded_p))
b.append_name (operand_suffixes[instance.op_info->op]);
if (!overloaded_p)
@@ -1050,7 +1052,7 @@ struct crypto_vv_def : public build_base
}
};
-/* vaeskf1/vaeskf2/vsm4k class. */
+/* vaeskf1/vaeskf2/vsm4k/vsm3c class. */
struct crypto_vi_def : public build_base
{
char *get_name (function_builder &b, const function_instance &instance,
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
index f09315923f3..c360c1d794f 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -20,5 +20,6 @@ AVAIL (zvkned, TARGET_ZVKNED)
AVAIL (zvknha_or_zvknhb, TARGET_ZVKNHA || TARGET_ZVKNHB)
AVAIL (zvknhb, TARGET_ZVKNHB)
AVAIL (zvksed, TARGET_ZVKSED)
+AVAIL (zvksh, TARGET_ZVKSH)
}
#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
index 67f3bf5284b..53be469b2e6 100755
--- a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -72,4 +72,7 @@ DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew
DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops, zvksed)
DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops, zvksed)
DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops, zvksed)
-DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvksed)
\ No newline at end of file
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvksed)
+//Zvksh
+DEF_VECTOR_CRYPTO_FUNCTION (vsm3me, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvksh)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm3c, crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvksh)
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index c076b82008a..2df2cb66455 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -454,6 +454,8 @@
;; vsha2cl crypto vector SHA-2 two rounds of compression instructions
;; vsm4k crypto vector SM4 KeyExpansion instructions
;; vsm4r crypto vector SM4 Rounds instructions
+;; vsm3me crypto vector SM3 Message Expansion instructions
+;; vsm3c crypto vector SM3 Compression instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -475,7 +477,7 @@
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
- vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r"
+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
index 7bd4cd9f8b9..c62553d3292 100755
--- a/gcc/config/riscv/vector-crypto.md
+++ b/gcc/config/riscv/vector-crypto.md
@@ -37,6 +37,8 @@
UNSPEC_VSM4R
UNSPEC_VSM4RVV
UNSPEC_VSM4RVS
+ UNSPEC_VSM3ME
+ UNSPEC_VSM3C
])
(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
@@ -59,6 +61,8 @@
(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
(UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
(UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
@@ -83,6 +87,8 @@
(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
;; zvbb instructions patterns.
;; vandn.vv vandn.vx vrol.vv vrol.vx
;; vror.vv vror.vx vror.vi
@@ -462,11 +468,11 @@
[(set_attr "type" "v<vi_ins_name>")
(set_attr "mode" "<MODE>")])
-;; vaeskf2.vi
-(define_insn "@pred_vaeskf2<mode>_scalar"
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
[(set (match_operand:VSI 0 "register_operand" "=vd")
(if_then_else:VSI
- (unspec:<VM>
+ (unspec:<VSI:VM>
[(match_operand 4 "vector_length_operand" "rK")
(match_operand 5 "const_int_operand" " i")
(match_operand 6 "const_int_operand" " i")
@@ -475,9 +481,30 @@
(unspec:VSI
[(match_operand:VSI 1 "register_operand" "0")
(match_operand:VSI 2 "register_operand" "vr")
- (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_VAESKF2)
+ (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
(match_dup 1)))]
- "TARGET_ZVKNED"
- "vaeskf2.vi\t%0,%2,%3"
- [(set_attr "type" "vaeskf2")
- (set_attr "mode" "<MODE>")])
\ No newline at end of file
+ "TARGET_ZVKNED || TARGET_ZVKSH"
+ "v<vi_ins1_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKSH"
+ "vsm3me.vv\t%0,%2,%3"
+ [(set_attr "type" "vsm3me")
+ (set_attr "mode" "<VSI:MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 7fae91b3860..2dda02d96f3 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -54,7 +54,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -78,7 +78,7 @@
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -707,7 +707,7 @@
(const_int 2)
(eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
- vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r")
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -747,7 +747,7 @@
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
- vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k")
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -780,7 +780,7 @@
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
- vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k")
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -869,7 +869,7 @@
(const_int 9)
(eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
- vsm4k")
+ vsm4k,vsm3me,vsm3c")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
index 7d87b0c1bee..5e2778a51a8 100644
--- a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -48,5 +48,7 @@ dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknhb/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvksed/*.\[cS\]]] \
"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvksh/*.\[cS\]]] \
+ "" $DEFAULT_CFLAGS
# All done.
dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
new file mode 100644
index 00000000000..1cea2489708
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl);
+}
+
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3c\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
new file mode 100644
index 00000000000..01b4c0fbb95
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3c\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
new file mode 100644
index 00000000000..78fdf741643
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3me\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c
new file mode 100644
index 00000000000..00c9cfe56ca
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3me\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 10 } } */
\ No newline at end of file
--
2.17.1
* Re: [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
` (5 preceding siblings ...)
2023-12-04 2:57 ` [PATCH 7/7] RISC-V: Add intrinsic functions for crypto vector Zvksh extension Feng Wang
@ 2023-12-04 8:01 ` Kito Cheng
2023-12-04 8:44 ` Feng Wang
6 siblings, 1 reply; 10+ messages in thread
From: Kito Cheng @ 2023-12-04 8:01 UTC (permalink / raw)
To: Feng Wang; +Cc: gcc-patches, jeffreyalaw, zhusonghe, panciyan
Hi Feng:
Thanks for the patch! A few inline comments below. Also, don't include
all the test files from the doc generator; including only a few within
the patch is fine, e.g. pick one for each group, so that it won't bloat
the GCC source tree too much.
> diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
> index 935eeb7fd8e..2a3777e168c 100644
> --- a/gcc/config/riscv/riscv.md
> +++ b/gcc/config/riscv/riscv.md
> @@ -428,6 +428,15 @@
> ;; vcompress vector compress instruction
> ;; vmov whole vector register move
> ;; vector unknown vector instruction
> +;; vandn crypto vector bitwise and-not instructions
> +;; vbrev crypto vector reverse bits in elements instructions
> +;; vbrev8 crypto vector reverse bits in bytes instructions
> +;; vrev8 crypto vector reverse bytes instructions
> +;; vclz crypto vector count leading zeros instructions
> +;; vctz crypto vector count trailing zeros instructions
> +;; vrol crypto vector rotate left instructions
> +;; vror crypto vector rotate right instructions
Use vialu for the above operations; no new type is needed for those instructions.
> +;; vwsll crypto vector widening shift left logical instructions
Rename to vwshift to make it consistent with vnshift.
> diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
> new file mode 100755
> index 00000000000..0373cf6f48a
> --- /dev/null
> +++ b/gcc/config/riscv/vector-crypto.md
> @@ -0,0 +1,207 @@
> +(define_c_enum "unspec" [
> + ;; Zvbb unspecs
> + UNSPEC_VANDN
> + UNSPEC_VBREV
> + UNSPEC_VBREV8
> + UNSPEC_VREV8
> + UNSPEC_VCLZ
> + UNSPEC_VCTZ
> + UNSPEC_VROL
> + UNSPEC_VROR
> + UNSPEC_VWSLL
> +])
Could you use generic RTL codes for andn, clz, ctz, rol, ror and wsll
rather than unspecs?
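(For illustration, a hypothetical C-level sketch of why generic rtx codes
help, not part of the original mail: once vrol/vror are modeled with the
rotate codes instead of an unspec, the middle end can recognize open-coded
rotate idioms like the one below and map them onto the vector rotate
instructions when vectorizing, instead of matching only the intrinsics.)

#include <stdint.h>

/* Portable rotate-left idiom that GCC's middle end can detect.  */
static uint32_t
rotl32 (uint32_t x, unsigned n)
{
  n &= 31;
  return (x << n) | (x >> ((32 - n) & 31));
}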
* Re: Re: [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension
2023-12-04 8:01 ` [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Kito Cheng
@ 2023-12-04 8:44 ` Feng Wang
0 siblings, 0 replies; 10+ messages in thread
From: Feng Wang @ 2023-12-04 8:44 UTC (permalink / raw)
To: kito.cheng; +Cc: gcc-patches, Jeff Law, zhusonghe, panciyan
2023-12-04 16:01 Kito Cheng <kito.cheng@gmail.com> wrote:
>Hi Feng:
>
>Thanks for the patch! a few inline comments below, also don't include
>all test files from doc generator, only include a few within the patch
>is fine, e.g. pick one for each group, so that it won't make GCC
>source tree bloat too much.
>
OK. The test cases are indeed too large; they will be reduced.
>> diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
>> index 935eeb7fd8e..2a3777e168c 100644
>> --- a/gcc/config/riscv/riscv.md
>> +++ b/gcc/config/riscv/riscv.md
>> @@ -428,6 +428,15 @@
>> ;; vcompress vector compress instruction
>> ;; vmov whole vector register move
>> ;; vector unknown vector instruction
>> +;; vandn crypto vector bitwise and-not instructions
>> +;; vbrev crypto vector reverse bits in elements instructions
>> +;; vbrev8 crypto vector reverse bits in bytes instructions
>> +;; vrev8 crypto vector reverse bytes instructions
>> +;; vclz crypto vector count leading zeros instructions
>> +;; vctz crypto vector count trailing zeros instructions
>> +;; vrol crypto vector rotate left instructions
>> +;; vror crypto vector rotate right instructions
>
>Use vialu for the above operations; no new type is needed for those instructions.
>
>> +;; vwsll crypto vector widening shift left logical instructions
>
>Rename to vwshift to make it consistent with vnshift.
>
>> diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
>> new file mode 100755
>> index 00000000000..0373cf6f48a
>> --- /dev/null
>> +++ b/gcc/config/riscv/vector-crypto.md
>> @@ -0,0 +1,207 @@
>> +(define_c_enum "unspec" [
>> + ;; Zvbb unspecs
>> + UNSPEC_VANDN
>> + UNSPEC_VBREV
>> + UNSPEC_VBREV8
>> + UNSPEC_VREV8
>> + UNSPEC_VCLZ
>> + UNSPEC_VCTZ
>> + UNSPEC_VROL
>> + UNSPEC_VROR
>> + UNSPEC_VWSLL
>> +])
>
>Could you use generic RTL codes for andn, clz, ctz, rol, ror and wsll
>rather than unspecs?
Got it! Will optimize it, thanks!
Feng Wang
* [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension
@ 2023-12-04 3:37 juzhe.zhong
0 siblings, 0 replies; 10+ messages in thread
From: juzhe.zhong @ 2023-12-04 3:37 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, wangfeng
Hi Eswin,
Thanks for contributing vector crypto support.
It seems the patches got messed up. Could you rebase your patches onto
trunk GCC cleanly and send them again?
The patches look odd to me, for example:
// ZVBB
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb, full_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_wvv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_shift_wvx_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb_zvbc, full_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_shift_wvx_ops, zvbb)
It seems your local development history got mixed up, which makes the
series hard to review. I would expect the patches to be organized as follows:
1. Add crypto march support (riscv-common.cc)
2. Add crypto machine descriptions (vector-crypto.md)
3. Add crypto builtin.
4. Add testcases.
Thanks.
juzhe.zhong@rivai.ai
Thread overview: 10+ messages
2023-12-04 2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
2023-12-04 2:57 ` [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension Feng Wang
2023-12-04 2:57 ` [PATCH 3/7] RISC-V: Add intrinsic functions for crypto vector Zvkg extension Feng Wang
2023-12-04 2:57 ` [PATCH 4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension Feng Wang
2023-12-04 2:57 ` [PATCH 5/7] RISC-V: Add intrinsic functions for crypto vector Zvknh[ab] extension Feng Wang
2023-12-04 2:57 ` [PATCH 6/7] RISC-V: Add intrinsic functions for crypto vector Zvksed extension Feng Wang
2023-12-04 2:57 ` [PATCH 7/7] RISC-V: Add intrinsic functions for crypto vector Zvksh extension Feng Wang
2023-12-04 8:01 ` [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Kito Cheng
2023-12-04 8:44 ` Feng Wang
2023-12-04 3:37 [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension juzhe.zhong