* [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension
@ 2023-12-04  2:57 Feng Wang
From: Feng Wang @ 2023-12-04  2:57 UTC
  To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, Feng Wang

This patch adds the intrinsic functions (according to
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md)
for the crypto vector Zvbb extension.  All the corresponding test cases
are added for API testing.
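
For reference, a minimal usage sketch of the new API (illustrative only;
the function names below are made up, and the calls mirror the patterns
covered by the added tests, compiled with -march=rv64gc_zvbb as in their
dg-options):

#include <stdint.h>
#include <riscv_vector.h>

/* vandn.vx: each element computes vs2 & ~rs1 (bitwise and-not).  */
vuint32m1_t clear_bits (vuint32m1_t vs2, uint32_t rs1, size_t vl)
{
  return __riscv_vandn_vx_u32m1 (vs2, rs1, vl);
}

/* Masked variant of the same operation.  */
vuint32m1_t clear_bits_m (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1,
                          size_t vl)
{
  return __riscv_vandn_vx_u32m1_m (mask, vs2, rs1, vl);
}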

Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>

gcc/ChangeLog:

        * common/config/riscv/riscv-common.cc (riscv_implied_info): Add zvbb and zvkb implied info.
        * config/riscv/riscv-vector-builtins-bases.cc (class vandn): Add new function_base for Zvbb.
        (class ror_rol): Ditto.
        (class b_reverse): Ditto.
        (class vcltz): Ditto.
        (class vwsll): Ditto.
        (BASE): Add Zvbb BASE declaration.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.
        * config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_def): Add new function_builder for Zvbb.
        (SHAPE): Add Zvbb SHAPE declaration.
        * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_VECTOR_CRYPTO_FUNCTION): Read crypto vector function def.
        (handle_pragma_vector): Register intrinsic functions for crypto vector.
        * config/riscv/riscv-vector-builtins.h (struct crypto_function_group_info):
        New struct; wraps function_group_info and adds the AVAIL predicate.
        * config/riscv/riscv.md: Add Zvbb instruction type names.
        * config/riscv/t-riscv: Add the new dependency files for building.
        * config/riscv/vector.md: Add the corresponding attributes for Zvbb.
        * config/riscv/riscv-vector-crypto-builtins-avail.h: New file. The enable conditions.
        * config/riscv/riscv-vector-crypto-builtins-functions.def: New file. The intrinsic definitions.
        * config/riscv/vector-crypto.md: New file. The crypto vector machine description.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/zvk/zvbb/vandn.c: New test.
        * gcc.target/riscv/zvk/zvbb/vandn_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev8.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vclz.c: New test.
        * gcc.target/riscv/zvk/zvbb/vclz_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vctz.c: New test.
        * gcc.target/riscv/zvk/zvbb/vctz_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrev8.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrol.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrol_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vror.c: New test.
        * gcc.target/riscv/zvk/zvbb/vror_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vwsll.c: New test.
        * gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/zvkb.c: New test.
        * gcc.target/riscv/zvk/zvk.exp: New test.
        * gcc.target/riscv/zvkb.c: New test.
---
 gcc/common/config/riscv/riscv-common.cc       |    2 +
 .../riscv/riscv-vector-builtins-bases.cc      |  104 +-
 .../riscv/riscv-vector-builtins-bases.h       |   10 +
 .../riscv/riscv-vector-builtins-shapes.cc     |   27 +-
 .../riscv/riscv-vector-builtins-shapes.h      |    2 +
 gcc/config/riscv/riscv-vector-builtins.cc     |   40 +
 gcc/config/riscv/riscv-vector-builtins.h      |    8 +
 .../riscv-vector-crypto-builtins-avail.h      |   18 +
 ...riscv-vector-crypto-builtins-functions.def |   19 +
 gcc/config/riscv/riscv.md                     |   12 +-
 gcc/config/riscv/t-riscv                      |    2 +
 gcc/config/riscv/vector-crypto.md             |  207 ++++
 gcc/config/riscv/vector.md                    |   31 +-
 .../gcc.target/riscv/zvk/zvbb/vandn.c         | 1072 ++++++++++++++++
 .../riscv/zvk/zvbb/vandn_overloaded.c         | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vbrev.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vbrev8.c        |  542 +++++++++
 .../riscv/zvk/zvbb/vbrev8_overloaded.c        |  543 +++++++++
 .../riscv/zvk/zvbb/vbrev_overloaded.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vclz.c          |  187 +++
 .../riscv/zvk/zvbb/vclz_overloaded.c          |  187 +++
 .../gcc.target/riscv/zvk/zvbb/vctz.c          |  187 +++
 .../riscv/zvk/zvbb/vctz_overloaded.c          |  188 +++
 .../gcc.target/riscv/zvk/zvbb/vrev8.c         |  542 +++++++++
 .../riscv/zvk/zvbb/vrev8_overloaded.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vrol.c          | 1072 ++++++++++++++++
 .../riscv/zvk/zvbb/vrol_overloaded.c          | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vror.c          | 1073 +++++++++++++++++
 .../riscv/zvk/zvbb/vror_overloaded.c          | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vwsll.c         |  736 +++++++++++
 .../riscv/zvk/zvbb/vwsll_overloaded.c         |  736 +++++++++++
 .../gcc.target/riscv/zvk/zvbb/zvkb.c          |   48 +
 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp    |   41 +
 gcc/testsuite/gcc.target/riscv/zvkb.c         |   13 +
 34 files changed, 12475 insertions(+), 16 deletions(-)
 create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
 create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
 create mode 100755 gcc/config/riscv/vector-crypto.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvkb.c

diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 6c210412515..a5fb492c690 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -120,6 +120,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
   {"zvksc", "zvbc"},
   {"zvksg", "zvks"},
   {"zvksg", "zvkg"},
+  {"zvbb",  "zvkb"},
+  {"zvkb",     "v"},
 
   {"zfh", "zfhmin"},
   {"zfhmin", "f"},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..e41343b4a1a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2127,6 +2127,88 @@ public:
   }
 };
 
+/* Below are the vector crypto implementations.  */
+/* Implements vandn.[vv,vx] */
+class vandn : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_vandn (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_vandn_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vrol/vror.[vv,vx] */
+template<int UNSPEC>
+class ror_rol : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_v_scalar (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vbrev/vbrev8/vrev8.v */
+template<int UNSPEC>
+class b_reverse : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vclz/vctz.v */
+template<int UNSPEC>
+class vcltz : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_vc (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vwsll */
+class vwsll : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_vwsll (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_vwsll_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
 static CONSTEXPR const vsetvl<false> vsetvl_obj;
 static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
 static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -2384,6 +2466,17 @@ static CONSTEXPR const seg_indexed_store<UNSPEC_UNORDERED> vsuxseg_obj;
 static CONSTEXPR const seg_indexed_store<UNSPEC_ORDERED> vsoxseg_obj;
 static CONSTEXPR const vlsegff vlsegff_obj;
 
+/* Crypto Vector */
+static CONSTEXPR const vandn vandn_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROL>      vrol_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROR>      vror_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV>   vbrev_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV8>  vbrev8_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VREV8>   vrev8_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCLZ>        vclz_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCTZ>        vctz_obj;
+static CONSTEXPR const vwsll vwsll_obj;
+
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
 #define BASE(NAME) \
@@ -2645,5 +2738,14 @@ BASE (vloxseg)
 BASE (vsuxseg)
 BASE (vsoxseg)
 BASE (vlsegff)
-
+/* Crypto vector */
+BASE (vandn)
+BASE (vbrev)
+BASE (vbrev8)
+BASE (vrev8)
+BASE (vclz)
+BASE (vctz)
+BASE (vrol)
+BASE (vror)
+BASE (vwsll)
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..2f46974bd27 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -280,6 +280,16 @@ extern const function_base *const vloxseg;
 extern const function_base *const vsuxseg;
 extern const function_base *const vsoxseg;
 extern const function_base *const vlsegff;
+/* Below function_base are for Vector Crypto.  */
+extern const function_base *const vandn;
+extern const function_base *const vbrev;
+extern const function_base *const vbrev8;
+extern const function_base *const vrev8;
+extern const function_base *const vclz;
+extern const function_base *const vctz;
+extern const function_base *const vrol;
+extern const function_base *const vror;
+extern const function_base *const vwsll;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..a98c2389fbc 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -984,6 +984,31 @@ struct seg_fault_load_def : public build_base
   }
 };
 
+/* vandn/vbrev/vbrev8/vrev8/vclz/vctz/vrol/vror/vwsll class.  */
+struct zvbb_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+                  bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    if (!overloaded_p)
+      {
+        b.append_name (operand_suffixes[instance.op_info->op]);
+        b.append_name (type_suffixes[instance.type.index].vector);
+      }
+    /* According to the vector crypto intrinsic doc, the "_m" suffix is
+       not appended for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
@@ -1012,5 +1037,5 @@ SHAPE(vlenb, vlenb)
 SHAPE(seg_loadstore, seg_loadstore)
 SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
 SHAPE(seg_fault_load, seg_fault_load)
-
+SHAPE(zvbb, zvbb)
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..e8959a2f277 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -52,6 +52,8 @@ extern const function_shape *const vlenb;
 extern const function_shape *const seg_loadstore;
 extern const function_shape *const seg_indexed_loadstore;
 extern const function_shape *const seg_fault_load;
+/* Below function_shape are for Vector Crypto.  */
+extern const function_shape *const zvbb;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6330a3a41c3..7a1da5c4539 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "riscv-vector-crypto-builtins-avail.h"
 
 using namespace riscv_vector;
 
@@ -754,6 +755,11 @@ static CONSTEXPR const rvv_arg_type_info v_size_args[]
   = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info (RVV_BASE_size),
      rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (double demote_type, size_t) function.  */
+static CONSTEXPR const rvv_arg_type_info wv_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_double_trunc_vector),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
 /* A list of args for vector_type func (vector_type, vector_type, size)
  * function.  */
 static CONSTEXPR const rvv_arg_type_info vv_size_args[]
@@ -1044,6 +1050,14 @@ static CONSTEXPR const rvv_op_info u_v_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      end_args /* Args */};
 
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_vv_ops
+  = {u_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_args /* Args */};
+
 /* A static operand information for unsigned long func (vector_type)
  * function registration. */
 static CONSTEXPR const rvv_op_info b_ulong_m_ops
@@ -2174,6 +2188,14 @@ static CONSTEXPR const rvv_op_info u_wvv_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      wvv_args /* Args */};
 
+/* A static operand information for vector_type func (double demote type, size type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_shift_wvx_ops
+  = {wextu_ops,				  /* Types */
+     OP_TYPE_vx,			  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     wv_size_args /* Args */};
+
 /* A static operand information for vector_type func (double demote type, double
  * demote scalar_type) function registration. */
 static CONSTEXPR const rvv_op_info i_wvx_ops
@@ -2689,6 +2711,14 @@ static function_group_info function_groups[] = {
 #include "riscv-vector-builtins-functions.def"
 };
 
+/* A list of all Vector Crypto intrinsic functions.  */
+static crypto_function_group_info crypto_function_groups[] = {
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL) \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO,\
+   riscv_vector_crypto_avail_ ## AVAIL},
+#include "riscv-vector-crypto-builtins-functions.def"
+};
+
 /* The RVV types, with their built-in
    "__rvv..._t" name.  Allow an index of NUM_VECTOR_TYPES, which always
    yields a null tree.  */
@@ -4414,6 +4444,16 @@ handle_pragma_vector ()
   function_builder builder;
   for (unsigned int i = 0; i < ARRAY_SIZE (function_groups); ++i)
     builder.register_function_group (function_groups[i]);
+
+  /* Since the crypto vector intrinsic functions depend on the vector
+     extension, their registration is placed last.  */
+  for (unsigned int i = 0; i < ARRAY_SIZE (crypto_function_groups); ++i)
+    {
+      crypto_function_group_info *f = &crypto_function_groups[i];
+      if (f->avail ())
+        builder.register_function_group
+          (f->rvv_function_group_info);
+    }
 }
 
 /* Return the function decl with RVV function subcode CODE, or error_mark_node
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index cd8ccab1724..a62e8cb4845 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -234,6 +234,14 @@ struct function_group_info
   const rvv_op_info ops_infos;
 };
 
+/* Static information about a set of crypto vector functions.  */
+struct crypto_function_group_info
+{
+  struct function_group_info rvv_function_group_info;
+  /* Whether the function is available.  */
+  unsigned int (*avail) (void);
+};
+
 class GTY ((user)) function_instance
 {
 public:
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
new file mode 100755
index 00000000000..2719027a7da
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -0,0 +1,18 @@
+#ifndef GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+#define GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+
+#include "insn-codes.h"
+namespace riscv_vector {
+
+/* Declare an availability predicate for built-in functions.  */
+#define AVAIL(NAME, COND)		\
+ static unsigned int			\
+ riscv_vector_crypto_avail_##NAME (void)	\
+ {					\
+   return (COND);			\
+ }
+
+AVAIL (zvbb, TARGET_ZVBB)
+AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
+}
+#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
new file mode 100755
index 00000000000..f3371f28a42
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -0,0 +1,19 @@
+#ifndef DEF_VECTOR_CRYPTO_FUNCTION
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL)
+#endif
+
+
+// ZVBB
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb, full_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_shift_wvx_ops, zvbb)
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..2a3777e168c 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,15 @@
 ;; vcompress    vector compress instruction
 ;; vmov         whole vector register move
 ;; vector       unknown vector instruction
+;; vandn        crypto vector bitwise and-not instructions
+;; vbrev        crypto vector reverse bits in elements instructions
+;; vbrev8       crypto vector reverse bits in bytes instructions
+;; vrev8        crypto vector reverse bytes instructions
+;; vclz         crypto vector count leading zeros instructions
+;; vctz         crypto vector count trailing zeros instructions
+;; vrol         crypto vector rotate left instructions
+;; vror         crypto vector rotate right instructions
+;; vwsll        crypto vector widening shift left logical instructions
 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +456,7 @@
    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
-   vgather,vcompress,vmov,vector"
+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3756,7 @@
 (include "thead.md")
 (include "generic-ooo.md")
 (include "vector.md")
+(include "vector-crypto.md")
 (include "zicond.md")
 (include "zc.md")
 (include "corev.md")
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 3b9686daa58..429d36b6425 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -1,6 +1,8 @@
 RISCV_BUILTINS_H = $(srcdir)/config/riscv/riscv-vector-builtins.h \
 		   $(srcdir)/config/riscv/riscv-vector-builtins.def \
 		   $(srcdir)/config/riscv/riscv-vector-builtins-functions.def \
+		   $(srcdir)/config/riscv/riscv-vector-crypto-builtins-avail.h \
+		   $(srcdir)/config/riscv/riscv-vector-crypto-builtins-functions.def \
 		   riscv-vector-type-indexer.gen.def
 
 riscv-builtins.o: $(srcdir)/config/riscv/riscv-builtins.cc $(CONFIG_H) \
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..0373cf6f48a
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,207 @@
+(define_c_enum "unspec" [
+    ;; Zvbb unspecs
+    UNSPEC_VANDN
+    UNSPEC_VBREV
+    UNSPEC_VBREV8
+    UNSPEC_VREV8
+    UNSPEC_VCLZ
+    UNSPEC_VCTZ
+    UNSPEC_VROL
+    UNSPEC_VROR
+    UNSPEC_VWSLL
+])
+
+(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
+
+(define_int_attr lt [(UNSPEC_VCLZ "lz") (UNSPEC_VCTZ "tz")])
+
+(define_int_attr rev  [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
+
+(define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+;; Zvbb instruction patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")
+	     (match_operand:VI 4 "register_operand"  "vr,vr")]UNSPEC_VANDN)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"     "vr,vr")
+	     (match_operand:<VEL> 4 "register_operand"  "r,  r")]UNSPEC_VANDN)
+	  (match_operand:VI 2 "vector_merge_operand"    "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")
+	     (match_operand:VI 4 "register_operand"  "vr,vr")]UNSPEC_VRORL)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<ror_rol>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "v<ror_rol>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK, rK")
+	     (match_operand 6 "const_int_operand"        "i,   i")
+	     (match_operand 7 "const_int_operand"        "i,   i")
+	     (match_operand 8 "const_int_operand"        "i,   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"    "vr, vr")
+	     (match_operand 4 "pmode_register_operand" "r,   r")]UNSPEC_VRORL)
+	  (match_operand:VI 2 "vector_merge_operand"   "vu,  0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<ror_rol>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "v<ror_rol>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK, rK")
+	     (match_operand 6 "const_int_operand"        "i,   i")
+	     (match_operand 7 "const_int_operand"        "i,   i")
+	     (match_operand 8 "const_int_operand"        "i,   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr, vr")
+	     (match_operand 4 "const_csr_operand"     "K,  K")]UNSPEC_VROR)
+	  (match_operand:VI 2 "vector_merge_operand" "vu,  0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vror.vi\t%0,%3,%4%p1"
+  [(set_attr "type" "vror")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+  [(set (match_operand:VWEXTI 0 "register_operand"       "=&vd")
+	(if_then_else:VWEXTI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK")
+	     (match_operand 6 "const_int_operand"        "    i")
+	     (match_operand 7 "const_int_operand"        "    i")
+	     (match_operand 8 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VWEXTI
+	    [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"  "vr")
+	     (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand"  "vr")]UNSPEC_VWSLL)
+	  (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+  [(set (match_operand:VWEXTI 0 "register_operand"       "=&vd")
+	(if_then_else:VWEXTI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK")
+	     (match_operand 6 "const_int_operand"        "    i")
+	     (match_operand 7 "const_int_operand"        "    i")
+	     (match_operand 8 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VWEXTI
+	    [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "vr")
+	     (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK")]UNSPEC_VWSLL)
+	  (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 5 "const_int_operand"        "    i,    i")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")]UNSPEC_VRBB8)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<rev>.v\t%0,%3%p1"
+  [(set_attr "type" "v<rev>")
+   (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+
+(define_insn "@pred_vc<lt><mode>"
+  [(set (match_operand:VI 0  "register_operand"  "=vd")
+        (unspec:VI
+          [(match_operand:VI 2 "register_operand" "vr")
+           (unspec:<VM>
+             [(match_operand:<VM> 1 "vector_mask_operand"   "vmWc1")
+              (match_operand 3      "vector_length_operand" "   rK")
+              (match_operand 4      "const_int_operand"     "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)]UNSPEC_VCLTZ))]
+  "TARGET_ZVBB"
+  "vc<lt>.v\t%0,%2%p1"
+  [(set_attr "type" "vc<lt>")
+   (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..3e08e18d355 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,8 @@
 			  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-			  vssegtux,vssegtox,vlsegdff")
+			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+                          vror,vwsll")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -74,7 +75,8 @@
 			  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-			  vssegtux,vssegtox,vlsegdff")
+			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+                          vror,vwsll")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -698,7 +700,8 @@
 				vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
 				vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
 				vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-				vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+				vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+                                vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll")
 	       (const_int 2)
 
 	       (eq_attr "type" "vimerge,vfmerge,vcompress")
@@ -740,7 +743,7 @@
 			  vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
 			  vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
 			  vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
-			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 4)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +758,14 @@
 			  vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+                          vror,vwsll")
 	   (const_int 5)
 
 	 (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz")
 	   (const_int 3)]
   (const_int INVALID_ATTRIBUTE)))
 
@@ -770,7 +774,7 @@
   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
 			  vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
 			  vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-			  vcompress,vldff,vlsegde,vlsegdff")
+			  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (symbol_ref "riscv_vector::get_ta(operands[5])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,7 +790,7 @@
 			  vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
 			  vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
 	   (symbol_ref "riscv_vector::get_ta(operands[6])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -800,7 +804,7 @@
 (define_attr "ma" ""
   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
 			  vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
-			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (symbol_ref "riscv_vector::get_ma(operands[6])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +819,8 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
 			  vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
-			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+                          vror,vwsll")
 	   (symbol_ref "riscv_vector::get_ma(operands[7])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -831,7 +836,7 @@
 			  vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
 			  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
 			  vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
-			  vimovxv,vfmovfv,vlsegde,vlsegdff")
+			  vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 7)
 	 (eq_attr "type" "vldm,vstm,vmalu,vmalu")
 	   (const_int 5)
@@ -848,7 +853,7 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
 	   (const_int 8)
 	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
 	   (const_int 5)
@@ -859,7 +864,7 @@
 	 (eq_attr "type" "vmsfs,vmidx,vcompress")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
 	   (const_int 4)]
 	(const_int INVALID_ATTRIBUTE)))
 
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
new file mode 100644
index 00000000000..b044c020f71
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
new file mode 100644
index 00000000000..9af458cd671
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
new file mode 100644
index 00000000000..6725744071d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
new file mode 100644
index 00000000000..bb9cffe6797
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
new file mode 100644
index 00000000000..25f6b1c343f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
@@ -0,0 +1,543 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
new file mode 100644
index 00000000000..8a4d7bb7f8c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
new file mode 100644
index 00000000000..df19efd3c7d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
new file mode 100644
index 00000000000..b4466bd4c23
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0\.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
new file mode 100644
index 00000000000..ce3cc6dc551
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
new file mode 100644
index 00000000000..df083ea4504
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
@@ -0,0 +1,188 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
new file mode 100644
index 00000000000..a6533d11081
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
new file mode 100644
index 00000000000..49ce752376b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
new file mode 100644
index 00000000000..2bc8c44d738
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
new file mode 100644
index 00000000000..2ce4559dc29
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
new file mode 100644
index 00000000000..144be5c2756
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
@@ -0,0 +1,1073 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
new file mode 100644
index 00000000000..bebcf7facad
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
new file mode 100644
index 00000000000..e1946261e4f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 60 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
new file mode 100644
index 00000000000..512d76f5850
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 60 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
new file mode 100644
index 00000000000..cf956ced59e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
@@ -0,0 +1,48 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
new file mode 100644
index 00000000000..f0c5431d00c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -0,0 +1,41 @@
+# Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbb/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+
+# All done.
+dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvkb.c b/gcc/testsuite/gcc.target/riscv/zvkb.c
new file mode 100644
index 00000000000..d5c28e79ef6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvkb.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkb" { target { rv64 } } } */
+/* { dg-options "-march=rv32gc_zvkb" { target { rv32 } } } */
+
+#ifndef __riscv_zvkb
+#error "Feature macro not defined"
+#endif
+
+int
+foo (int a)
+{
+  return a;
+}
-- 
2.17.1


* [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension
@ 2023-12-04  3:37 juzhe.zhong
  0 siblings, 0 replies; 10+ messages in thread
From: juzhe.zhong @ 2023-12-04  3:37 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, zhusonghe, panciyan, wangfeng

Hi, eswin.

Thanks for contributing vector crypto support.

It seems the patches got mixed up. Could you rebase the series cleanly onto trunk GCC and send it again?

The patches look odd to me, for example:

 // ZVBB
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb, full_preds, u_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb, full_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb, full_preds, u_vv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb, none_m_preds, u_vv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_vvv_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_wvv_ops, zvbb)
-DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb, full_preds, u_shift_wvx_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn,  zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn,  zvbb_zvbc, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev,  zvbb_zvbc, full_preds, u_vv_ops,  zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb_zvbc, full_preds, u_vv_ops,  zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8,  zvbb_zvbc, full_preds, u_vv_ops,  zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz,   zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz,   zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol,   zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol,   zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror,   zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror,   zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll,  zvbb_zvbc, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll,  zvbb_zvbc, full_preds, u_shift_wvx_ops, zvbb)

It looks like your local development history got mixed into this series, which makes it hard to review.

I would expect the patches to be structured as follows (one possible rebase-and-resend workflow is sketched after this list):

1. Add crypto march support (riscv-common.cc)
2. Add crypto machine descriptions (vector-crypto.md)
3. Add crypto builtins.
4. Add testcases.
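
For reference, a minimal sketch of one way to do the rebase and resend (plain git; the origin/master branch name, the -v2 reroll marker, and the outgoing/ directory are illustrative assumptions, not taken from this thread):

  # Rebase the work onto current trunk and reorganize the history
  # into the logical patches listed above.
  git fetch origin
  git rebase -i origin/master

  # Regenerate the series as a new revision and send it out again.
  git format-patch --cover-letter -v2 -o outgoing/ origin/master
  git send-email --to=gcc-patches@gcc.gnu.org outgoing/*.patch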

Thanks.



juzhe.zhong@rivai.ai


Thread overview: 10+ messages
2023-12-04  2:57 [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Feng Wang
2023-12-04  2:57 ` [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension Feng Wang
2023-12-04  2:57 ` [PATCH 3/7] RISC-V: Add intrinsic functions for crypto vector Zvkg extension Feng Wang
2023-12-04  2:57 ` [PATCH 4/7] RISC-V: Add intrinsic functions for crypto vector Zvkned extension Feng Wang
2023-12-04  2:57 ` [PATCH 5/7] RISC-V: Add intrinsic functions for crypto vector Zvknh[ab] extension Feng Wang
2023-12-04  2:57 ` [PATCH 6/7] RISC-V: Add intrinsic functions for crypto vector Zvksed extension Feng Wang
2023-12-04  2:57 ` [PATCH 7/7] RISC-V: Add intrinsic functions for crypto vector Zvksh extension Feng Wang
2023-12-04  8:01 ` [PATCH 1/7] RISC-V: Add intrinsic functions for crypto vector Zvbb extension Kito Cheng
2023-12-04  8:44   ` Feng Wang
2023-12-04  3:37 [PATCH 2/7] RISC-V: Add intrinsic functions for crypto vector Zvbc extension juzhe.zhong
