public inbox for gcc-patches@gcc.gnu.org
* [PATCH] [RISC-V] Add intrinsics for Vector Crypto extension
@ 2023-11-30  0:25 Feng Wang
  0 siblings, 0 replies; only message in thread
From: Feng Wang @ 2023-11-30  0:25 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, jeffreyalaw, Feng Wang

This patch adds the intrinsic functions for the vector crypto extensions,
following https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/
vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md.
Test cases covering all of the new intrinsics are added for API testing.
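
For reference, here is a minimal usage sketch of the new intrinsics.  The
function names and prototypes follow the intrinsic document linked above;
the -march string below is illustrative only:

    #include <riscv_vector.h>

    /* Built with, e.g., -march=rv64gcv_zvbb_zvbc (illustrative).  */
    vuint32m1_t
    do_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl)
    {
      /* Bitwise and-not: vd[i] = vs2[i] & ~vs1[i] (Zvbb/Zvkb).  */
      return __riscv_vandn_vv_u32m1 (vs2, vs1, vl);
    }

    vuint64m1_t
    do_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl)
    {
      /* Carry-less multiply, low half (Zvbc).  */
      return __riscv_vclmul_vx_u64m1 (vs2, rs1, vl);
    }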

Co-Authored-By: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored-By: Ciyan Pan <panciyan@eswincomputing.com>

gcc/ChangeLog:

        * common/config/riscv/riscv-common.cc: Update with the latest spec
        and add more scenarios to riscv_implied_info.
        * config/riscv/riscv-vector-builtins-bases.cc (class vandn): New
        class; add expand for the vector crypto intrinsics.
        (class ror_rol): Likewise.
        (class b_reverse): Likewise.
        (class vcltz): Likewise.
        (class vwsll): Likewise.
        (class clmul): Likewise.
        (class crypto_vv): Likewise.
        (class crypto_vi): Likewise.
        (class vaeskf2_vsm3c): Likewise.
        (class vgnhab): Likewise.
        (class vsm3me): Likewise.
        (BASE): Define the new function bases.
        * config/riscv/riscv-vector-builtins-bases.h: Add function_base
        declarations for vector crypto.
        * config/riscv/riscv-vector-builtins-shapes.cc (struct zvbb_zvbc_def):
        New struct; add get_name for the vector crypto intrinsics.
        (struct crypto_vv_def): Likewise.
        (struct crypto_vi_def): Likewise.
        (SHAPE): Define the new function shapes.
        * config/riscv/riscv-vector-builtins-shapes.h: Add function_shape
        declarations for vector crypto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_CRYPTO_SEW32_OPS):
        New macro for the SEW=32 crypto type list.
        (DEF_RVV_CRYPTO_SEW64_OPS): New macro for the SEW=64 crypto type list.
        (DEF_VECTOR_CRYPTO_FUNCTION): New macro to build the vector crypto
        function groups.
        (registered_function::overloaded_hash): Handle the size argument of
        the crypto_vi shape.
        (handle_pragma_vector): Register the vector crypto function groups.
        * config/riscv/riscv-vector-builtins.def (vi): Add OP_TYPE_vi for
        vector crypto.
        * config/riscv/riscv-vector-builtins.h (struct crypto_function_group_info):
        New struct for the vector crypto function groups.
        * config/riscv/riscv.md: Add new instruction type names for vector
        crypto and include vector-crypto.md.
        * config/riscv/riscv.opt: Add Mask(ZVKB) according to the latest spec.
        * config/riscv/t-riscv: Add new file dependency for vector crypto.
        * config/riscv/vector-iterators.md: Add new iterators for vector crypto.
        * config/riscv/vector.md: Add the vector crypto instruction types to
        the relevant "define_attr" lists.
        * config/riscv/riscv-vector-crypto-builtins-avail.h: New file.
        Availability control for the vector crypto intrinsics.
        * config/riscv/riscv-vector-crypto-builtins-functions.def: New file.
        Intrinsic function definitions for vector crypto.
        * config/riscv/riscv-vector-crypto-builtins-types.def: New file.
        Type definitions for vector crypto.
        * config/riscv/vector-crypto.md: New md file for vector crypto.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/zvk/zvbb/vandn.c: New test.
        * gcc.target/riscv/zvk/zvbb/vandn_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev8.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vclz.c: New test.
        * gcc.target/riscv/zvk/zvbb/vclz_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vctz.c: New test.
        * gcc.target/riscv/zvk/zvbb/vctz_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrev8.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrol.c: New test.
        * gcc.target/riscv/zvk/zvbb/vrol_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vror.c: New test.
        * gcc.target/riscv/zvk/zvbb/vror_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/vwsll.c: New test.
        * gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbb/zvkb.c: New test.
        * gcc.target/riscv/zvk/zvbc/vclmul.c: New test.
        * gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvbc/vclmulh.c: New test.
        * gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvk.exp: New test.
        * gcc.target/riscv/zvk/zvkg/vghsh.c: New test.
        * gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkg/vgmul.c: New test.
        * gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdf.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdm.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesef.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesem.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf1.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf2.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesz.c: New test.
        * gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2ch.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2cl.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2ms.c: New test.
        * gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2ch.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2cl.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2ms.c: New test.
        * gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvksed/vsm4k.c: New test.
        * gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvksed/vsm4r.c: New test.
        * gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvksh/vsm3c.c: New test.
        * gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c: New test.
        * gcc.target/riscv/zvk/zvksh/vsm3me.c: New test.
        * gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c: New test.
---
 gcc/common/config/riscv/riscv-common.cc       |   66 +-
 .../riscv/riscv-vector-builtins-bases.cc      |  253 ++++
 .../riscv/riscv-vector-builtins-bases.h       |   28 +
 .../riscv/riscv-vector-builtins-shapes.cc     |   89 ++
 .../riscv/riscv-vector-builtins-shapes.h      |    4 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  150 ++-
 gcc/config/riscv/riscv-vector-builtins.def    |    1 +
 gcc/config/riscv/riscv-vector-builtins.h      |    8 +
 .../riscv-vector-crypto-builtins-avail.h      |   26 +
 ...riscv-vector-crypto-builtins-functions.def |   85 ++
 .../riscv-vector-crypto-builtins-types.def    |   21 +
 gcc/config/riscv/riscv.md                     |   32 +-
 gcc/config/riscv/riscv.opt                    |    2 +
 gcc/config/riscv/t-riscv                      |    2 +
 gcc/config/riscv/vector-crypto.md             |  519 ++++++++
 gcc/config/riscv/vector-iterators.md          |   41 +
 gcc/config/riscv/vector.md                    |   47 +-
 .../gcc.target/riscv/zvk/zvbb/vandn.c         | 1072 ++++++++++++++++
 .../riscv/zvk/zvbb/vandn_overloaded.c         | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vbrev.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vbrev8.c        |  542 +++++++++
 .../riscv/zvk/zvbb/vbrev8_overloaded.c        |  543 +++++++++
 .../riscv/zvk/zvbb/vbrev_overloaded.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vclz.c          |  187 +++
 .../riscv/zvk/zvbb/vclz_overloaded.c          |  187 +++
 .../gcc.target/riscv/zvk/zvbb/vctz.c          |  187 +++
 .../riscv/zvk/zvbb/vctz_overloaded.c          |  188 +++
 .../gcc.target/riscv/zvk/zvbb/vrev8.c         |  542 +++++++++
 .../riscv/zvk/zvbb/vrev8_overloaded.c         |  542 +++++++++
 .../gcc.target/riscv/zvk/zvbb/vrol.c          | 1072 ++++++++++++++++
 .../riscv/zvk/zvbb/vrol_overloaded.c          | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vror.c          | 1073 +++++++++++++++++
 .../riscv/zvk/zvbb/vror_overloaded.c          | 1072 ++++++++++++++++
 .../gcc.target/riscv/zvk/zvbb/vwsll.c         |  736 +++++++++++
 .../riscv/zvk/zvbb/vwsll_overloaded.c         |  736 +++++++++++
 .../gcc.target/riscv/zvk/zvbb/zvkb.c          |   48 +
 .../gcc.target/riscv/zvk/zvbc/vclmul.c        |  208 ++++
 .../riscv/zvk/zvbc/vclmul_overloaded.c        |  208 ++++
 .../gcc.target/riscv/zvk/zvbc/vclmulh.c       |  208 ++++
 .../riscv/zvk/zvbc/vclmulh_overloaded.c       |  208 ++++
 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp    |   57 +
 .../gcc.target/riscv/zvk/zvkg/vghsh.c         |   51 +
 .../riscv/zvk/zvkg/vghsh_overloaded.c         |   51 +
 .../gcc.target/riscv/zvk/zvkg/vgmul.c         |   51 +
 .../riscv/zvk/zvkg/vgmul_overloaded.c         |   51 +
 .../gcc.target/riscv/zvk/zvkned/vaesdf.c      |  169 +++
 .../riscv/zvk/zvkned/vaesdf_overloaded.c      |  169 +++
 .../gcc.target/riscv/zvk/zvkned/vaesdm.c      |  170 +++
 .../riscv/zvk/zvkned/vaesdm_overloaded.c      |  170 +++
 .../gcc.target/riscv/zvk/zvkned/vaesef.c      |  170 +++
 .../riscv/zvk/zvkned/vaesef_overloaded.c      |  170 +++
 .../gcc.target/riscv/zvk/zvkned/vaesem.c      |  170 +++
 .../riscv/zvk/zvkned/vaesem_overloaded.c      |  170 +++
 .../gcc.target/riscv/zvk/zvkned/vaeskf1.c     |   50 +
 .../riscv/zvk/zvkned/vaeskf1_overloaded.c     |   50 +
 .../gcc.target/riscv/zvk/zvkned/vaeskf2.c     |   50 +
 .../riscv/zvk/zvkned/vaeskf2_overloaded.c     |   50 +
 .../gcc.target/riscv/zvk/zvkned/vaesz.c       |  130 ++
 .../riscv/zvk/zvkned/vaesz_overloaded.c       |  130 ++
 .../gcc.target/riscv/zvk/zvknha/vsha2ch.c     |   51 +
 .../riscv/zvk/zvknha/vsha2ch_overloaded.c     |   51 +
 .../gcc.target/riscv/zvk/zvknha/vsha2cl.c     |   51 +
 .../riscv/zvk/zvknha/vsha2cl_overloaded.c     |   51 +
 .../gcc.target/riscv/zvk/zvknha/vsha2ms.c     |   51 +
 .../riscv/zvk/zvknha/vsha2ms_overloaded.c     |   51 +
 .../gcc.target/riscv/zvk/zvknhb/vsha2ch.c     |   83 ++
 .../riscv/zvk/zvknhb/vsha2ch_overloaded.c     |   83 ++
 .../gcc.target/riscv/zvk/zvknhb/vsha2cl.c     |   83 ++
 .../riscv/zvk/zvknhb/vsha2cl_overloaded.c     |   83 ++
 .../gcc.target/riscv/zvk/zvknhb/vsha2ms.c     |   83 ++
 .../riscv/zvk/zvknhb/vsha2ms_overloaded.c     |   83 ++
 .../gcc.target/riscv/zvk/zvksed/vsm4k.c       |   50 +
 .../riscv/zvk/zvksed/vsm4k_overloaded.c       |   50 +
 .../gcc.target/riscv/zvk/zvksed/vsm4r.c       |  170 +++
 .../riscv/zvk/zvksed/vsm4r_overloaded.c       |  170 +++
 .../gcc.target/riscv/zvk/zvksh/vsm3c.c        |   51 +
 .../riscv/zvk/zvksh/vsm3c_overloaded.c        |   51 +
 .../gcc.target/riscv/zvk/zvksh/vsm3me.c       |   51 +
 .../riscv/zvk/zvksh/vsm3me_overloaded.c       |   51 +
 79 files changed, 17642 insertions(+), 46 deletions(-)
 create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
 create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
 create mode 100755 gcc/config/riscv/riscv-vector-crypto-builtins-types.def
 create mode 100755 gcc/config/riscv/vector-crypto.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c

diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index ded85b4c578..a6d393d75f9 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -104,22 +104,32 @@ static const riscv_implied_info_t riscv_implied_info[] =
   {"zvl32768b", "zvl16384b"},
   {"zvl65536b", "zvl32768b"},
 
-  {"zvkn", "zvkned"},
-  {"zvkn", "zvknhb"},
-  {"zvkn", "zvbb"},
-  {"zvkn", "zvkt"},
-  {"zvknc", "zvkn"},
-  {"zvknc", "zvbc"},
-  {"zvkng", "zvkn"},
-  {"zvkng", "zvkg"},
-  {"zvks", "zvksed"},
-  {"zvks", "zvksh"},
-  {"zvks", "zvbb"},
-  {"zvks", "zvkt"},
-  {"zvksc", "zvks"},
-  {"zvksc", "zvbc"},
-  {"zvksg", "zvks"},
-  {"zvksg", "zvkg"},
+  {"zvknc",   "zvbc"},
+  {"zvknc",   "zvkn"},
+  {"zvkng",   "zvkg"},
+  {"zvkng",   "zvkn"},
+  {"zvkn",    "zvkb"},
+  {"zvkn",    "zvkt"},
+  {"zvkn",  "zvkned"},
+  {"zvkn",  "zvknhb"},
+  {"zvksc",   "zvbc"},
+  {"zvksc",   "zvks"},
+  {"zvksg",   "zvkg"},
+  {"zvksg",   "zvks"},
+  {"zvks",    "zvkb"},
+  {"zvks",    "zvkt"},
+  {"zvks",   "zvksh"},
+  {"zvks",  "zvksed"},
+  {"zvbb",    "zvkb"},
+  {"zvbc",       "v"},
+  {"zvkb",       "v"},
+  {"zvkg",       "v"},
+  {"zvkt",       "v"},
+  {"zvksh",      "v"},
+  {"zvkned",     "v"},
+  {"zvknha",     "v"},
+  {"zvknhb",     "v"},
+  {"zvksed",     "v"},
 
   {"zfh", "zfhmin"},
   {"zfhmin", "f"},
@@ -251,21 +261,22 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
   {"zve64f", ISA_SPEC_CLASS_NONE, 1, 0},
   {"zve64d", ISA_SPEC_CLASS_NONE, 1, 0},
 
-  {"zvbb", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvbc", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvkg", ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvbb",   ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvbc",   ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvkb",   ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvkg",   ISA_SPEC_CLASS_NONE, 1, 0},
   {"zvkned", ISA_SPEC_CLASS_NONE, 1, 0},
   {"zvknha", ISA_SPEC_CLASS_NONE, 1, 0},
   {"zvknhb", ISA_SPEC_CLASS_NONE, 1, 0},
   {"zvksed", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvksh", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvkn", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvknc", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvkng", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvks", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvksc", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvksg", ISA_SPEC_CLASS_NONE, 1, 0},
-  {"zvkt", ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvksh",  ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvkn",   ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvknc",  ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvkng",  ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvks",   ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvksc",  ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvksg",  ISA_SPEC_CLASS_NONE, 1, 0},
+  {"zvkt",   ISA_SPEC_CLASS_NONE, 1, 0},
 
   {"zvl32b", ISA_SPEC_CLASS_NONE, 1, 0},
   {"zvl64b", ISA_SPEC_CLASS_NONE, 1, 0},
@@ -1624,6 +1635,7 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
 
   {"zvbb",     &gcc_options::x_riscv_zvb_subext, MASK_ZVBB},
   {"zvbc",     &gcc_options::x_riscv_zvb_subext, MASK_ZVBC},
+  {"zvkb",     &gcc_options::x_riscv_zvb_subext, MASK_ZVKB},
   {"zvkg",     &gcc_options::x_riscv_zvk_subext, MASK_ZVKG},
   {"zvkned",   &gcc_options::x_riscv_zvk_subext, MASK_ZVKNED},
   {"zvknha",   &gcc_options::x_riscv_zvk_subext, MASK_ZVKNHA},
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..13179e6e655 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -2127,6 +2127,201 @@ public:
   }
 };
 
+/* Below implementations are for the vector crypto extensions.  */
+/* Implements vandn. */
+class vandn : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_vandn (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_vandn_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vrol/vror. */
+template<int UNSPEC>
+class ror_rol : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_v_scalar (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vbrev/vbrev8/vrev8. */
+template<int UNSPEC>
+class b_reverse : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+      return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vclz/vctz. */
+template<int UNSPEC>
+class vcltz : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+      return e.use_exact_insn (code_for_pred_vc (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vwsll. */
+class vwsll : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_vwsll (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_vwsll_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements clmul. */
+template<int UNSPEC>
+class clmul : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vv:
+        return e.use_exact_insn (code_for_pred_vclmul (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (code_for_pred_vclmul_scalar (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vgmul/vsm4r/vaes*. */
+template<int UNSPEC>
+class crypto_vv : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    poly_uint64 nunits = 0U;
+    switch (e.op_info->op)
+    {
+      case OP_TYPE_vv:
+        if (UNSPEC == UNSPEC_VGMUL)
+          return e.use_exact_insn (code_for_pred_crypto_vv (UNSPEC, UNSPEC, e.vector_mode ()));
+        else
+          return e.use_exact_insn (code_for_pred_crypto_vv
+                                 (UNSPEC + 1, UNSPEC + 1, e.vector_mode ()));
+      case OP_TYPE_vs:
+        /* Calculate the ratio between arg0 and arg1.  */
+        multiple_p (GET_MODE_BITSIZE (e.arg_mode (0)),
+                    GET_MODE_BITSIZE (e.arg_mode (1)), &nunits);
+        if (maybe_eq (nunits, 1U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx1_scalar
+                                   (UNSPEC +2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 2U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx2_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 4U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx4_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else if (maybe_eq (nunits, 8U))
+          return e.use_exact_insn (code_for_pred_crypto_vvx8_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+        else
+          return e.use_exact_insn (code_for_pred_crypto_vvx16_scalar
+                                   (UNSPEC + 2, UNSPEC + 2, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+    }
+  }
+};
+
+/* Implements vaeskf1/vsm4k. */
+template<int UNSPEC>
+class crypto_vi : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_crypto_vi_scalar (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vaeskf2/vsm3c. */
+template<int UNSPEC>
+class vaeskf2_vsm3c : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_vi_nomaskedoff_scalar (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vghsh/vsha2ms/vsha2ch/vsha2cl. */
+template<int UNSPEC>
+class vgnhab : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_v (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vsm3me. */
+class vsm3me : public function_base
+{
+public:
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_vsm3me (e.vector_mode ()));
+  }
+};
+
 static CONSTEXPR const vsetvl<false> vsetvl_obj;
 static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
 static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -2384,6 +2579,35 @@ static CONSTEXPR const seg_indexed_store<UNSPEC_UNORDERED> vsuxseg_obj;
 static CONSTEXPR const seg_indexed_store<UNSPEC_ORDERED> vsoxseg_obj;
 static CONSTEXPR const vlsegff vlsegff_obj;
 
+// Vector Crypto
+static CONSTEXPR const vandn vandn_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROL>      vrol_obj;
+static CONSTEXPR const ror_rol<UNSPEC_VROR>      vror_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV>   vbrev_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VBREV8>  vbrev8_obj;
+static CONSTEXPR const b_reverse<UNSPEC_VREV8>   vrev8_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCLZ>        vclz_obj;
+static CONSTEXPR const vcltz<UNSPEC_VCTZ>        vctz_obj;
+static CONSTEXPR const vwsll vwsll_obj;
+static CONSTEXPR const clmul<UNSPEC_VCLMUL>      vclmul_obj;
+static CONSTEXPR const clmul<UNSPEC_VCLMULH>     vclmulh_obj;
+static CONSTEXPR const vgnhab<UNSPEC_VGHSH>      vghsh_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VGMUL>   vgmul_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEF>  vaesef_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESEM>  vaesem_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDF>  vaesdf_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESDM>  vaesdm_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VAESZ>   vaesz_obj;
+static CONSTEXPR const crypto_vi<UNSPEC_VAESKF1> vaeskf1_obj;
+static CONSTEXPR const vaeskf2_vsm3c<UNSPEC_VAESKF2> vaeskf2_obj;
+static CONSTEXPR const vaeskf2_vsm3c<UNSPEC_VSM3C>   vsm3c_obj;
+static CONSTEXPR const vgnhab<UNSPEC_VSHA2MS>    vsha2ms_obj;
+static CONSTEXPR const vgnhab<UNSPEC_VSHA2CH>    vsha2ch_obj;
+static CONSTEXPR const vgnhab<UNSPEC_VSHA2CL>    vsha2cl_obj;
+static CONSTEXPR const crypto_vi<UNSPEC_VSM4K>   vsm4k_obj;
+static CONSTEXPR const crypto_vv<UNSPEC_VSM4R>   vsm4r_obj;
+static CONSTEXPR const vsm3me vsm3me_obj;
+
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
 #define BASE(NAME) \
@@ -2646,4 +2870,33 @@ BASE (vsuxseg)
 BASE (vsoxseg)
 BASE (vlsegff)
 
+// Vector Crypto
+BASE (vandn)
+BASE (vbrev)
+BASE (vbrev8)
+BASE (vrev8)
+BASE (vclz)
+BASE (vctz)
+BASE (vrol)
+BASE (vror)
+BASE (vwsll)
+BASE (vclmul)
+BASE (vclmulh)
+BASE (vghsh)
+BASE (vgmul)
+BASE (vaesef)
+BASE (vaesem)
+BASE (vaesdf)
+BASE (vaesdm)
+BASE (vaesz)
+BASE (vaeskf1)
+BASE (vaeskf2)
+BASE (vsha2ms)
+BASE (vsha2ch)
+BASE (vsha2cl)
+BASE (vsm4k)
+BASE (vsm4r)
+BASE (vsm3me)
+BASE (vsm3c)
+
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..95b7d982b5f 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -280,6 +280,34 @@ extern const function_base *const vloxseg;
 extern const function_base *const vsuxseg;
 extern const function_base *const vsoxseg;
 extern const function_base *const vlsegff;
+/* Below function_base instances are for vector crypto.  */
+extern const function_base *const vandn;
+extern const function_base *const vbrev;
+extern const function_base *const vbrev8;
+extern const function_base *const vrev8;
+extern const function_base *const vclz;
+extern const function_base *const vctz;
+extern const function_base *const vrol;
+extern const function_base *const vror;
+extern const function_base *const vwsll;
+extern const function_base *const vclmul;
+extern const function_base *const vclmulh;
+extern const function_base *const vghsh;
+extern const function_base *const vgmul;
+extern const function_base *const vaesef;
+extern const function_base *const vaesem;
+extern const function_base *const vaesdf;
+extern const function_base *const vaesdm;
+extern const function_base *const vaesz;
+extern const function_base *const vaeskf1;
+extern const function_base *const vaeskf2;
+extern const function_base *const vsha2ms;
+extern const function_base *const vsha2ch;
+extern const function_base *const vsha2cl;
+extern const function_base *const vsm4k;
+extern const function_base *const vsm4r;
+extern const function_base *const vsm3me;
+extern const function_base *const vsm3c;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..e94949c68ba 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -984,6 +984,92 @@ struct seg_fault_load_def : public build_base
   }
 };
 
+/* vandn/vbrev/vbrev8/vrev8/vclz/vctz/vrol/vror/vwsll/vclmul/vclmulh class.  */
+struct zvbb_zvbc_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+                  bool overloaded_p) const override
+  {
+    /* Return nullptr if it can not be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    if (!overloaded_p)
+    {
+      b.append_name (operand_suffixes[instance.op_info->op]);
+      b.append_name (type_suffixes[instance.type.index].vector);
+    }
+    /* According to the vector crypto intrinsic document, the "_m"
+       suffix is not added for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+/* vsha2ms/vsha2ch/vsha2cl/vsm3me/vaes* class.  */
+struct crypto_vv_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+                  bool overloaded_p) const override
+  {
+    /* Return nullptr if it can not be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+    b.append_base_name (instance.base_name);
+    /* There is no op_type name in the vaesz/vsha2ms/vsha2ch/vsha2cl/vghsh/vgmul/vsm3me overloaded intrinsics.  */
+    if (!((strcmp (instance.base_name, "vaesz") == 0
+          || strcmp (instance.base_name, "vsha2ms") == 0
+          || strcmp (instance.base_name, "vsha2ch") == 0
+          || strcmp (instance.base_name, "vsha2cl") == 0
+          || strcmp (instance.base_name, "vghsh") == 0
+          || strcmp (instance.base_name, "vgmul") == 0
+          || strcmp (instance.base_name, "vsm3me") == 0)
+          && overloaded_p))
+      b.append_name (operand_suffixes[instance.op_info->op]);
+    if (!overloaded_p)
+    {
+      if (instance.op_info->op == OP_TYPE_vv)
+        b.append_name (type_suffixes[instance.type.index].vector);
+      else
+      {
+        vector_type_index arg0_type_idx
+          = instance.op_info->args[1].get_function_type_index
+            (instance.type.index);
+        b.append_name (type_suffixes[arg0_type_idx].vector);
+        vector_type_index ret_type_idx
+          = instance.op_info->ret.get_function_type_index
+            (instance.type.index);
+        b.append_name (type_suffixes[ret_type_idx].vector);
+      }
+    }
+
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+/* vaeskf1/vaeskf2/vsm3c/vsm4k class.  */
+struct crypto_vi_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+                  bool overloaded_p) const override
+  {
+    /* Return nullptr if it can not be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+    b.append_base_name (instance.base_name);
+    if (!overloaded_p)
+    {
+      b.append_name (operand_suffixes[instance.op_info->op]);
+      b.append_name (type_suffixes[instance.type.index].vector);
+    }
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
 SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
@@ -1012,5 +1098,8 @@ SHAPE(vlenb, vlenb)
 SHAPE(seg_loadstore, seg_loadstore)
 SHAPE(seg_indexed_loadstore, seg_indexed_loadstore)
 SHAPE(seg_fault_load, seg_fault_load)
+SHAPE(zvbb_zvbc, zvbb_zvbc)
+SHAPE(crypto_vv, crypto_vv)
+SHAPE(crypto_vi, crypto_vi)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..07ecbc0ea07 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -52,6 +52,10 @@ extern const function_shape *const vlenb;
 extern const function_shape *const seg_loadstore;
 extern const function_shape *const seg_indexed_loadstore;
 extern const function_shape *const seg_fault_load;
+/* Below function_shape are Vector Crypto */
+extern const function_shape *const zvbb_zvbc;
+extern const function_shape *const crypto_vv;
+extern const function_shape *const crypto_vi;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6330a3a41c3..7efec4277a7 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "riscv-vector-crypto-builtins-avail.h"
 
 using namespace riscv_vector;
 
@@ -521,6 +522,19 @@ static const rvv_type_info tuple_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* Below types will be registered for the vector crypto intrinsic functions.  */
+/* A list of SEW=32 types registered for the vector crypto intrinsics.  */
+static const rvv_type_info crypto_sew32_ops[] = {
+#define DEF_RVV_CRYPTO_SEW32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-crypto-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of SEW=64 types registered for the vector crypto intrinsics.  */
+static const rvv_type_info crypto_sew64_ops[] = {
+#define DEF_RVV_CRYPTO_SEW64_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-crypto-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
   = rvv_arg_type_info (NUM_BASE_TYPES);
 
@@ -754,6 +768,11 @@ static CONSTEXPR const rvv_arg_type_info v_size_args[]
   = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info (RVV_BASE_size),
      rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (double demote_type, size_t) function.  */
+static CONSTEXPR const rvv_arg_type_info wv_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_double_trunc_vector),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
 /* A list of args for vector_type func (vector_type, vector_type, size)
  * function.  */
 static CONSTEXPR const rvv_arg_type_info vv_size_args[]
@@ -1702,6 +1721,14 @@ static CONSTEXPR const rvv_op_info i_v_u_ops
      rvv_arg_type_info (RVV_BASE_unsigned_vector), /* Return type */
      v_args /* Args */};
 
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_vv_ops
+  = {u_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_args /* Args */};
+
 /* A static operand information for vector_type func (vector_type)
  * function registration. */
 static CONSTEXPR const rvv_op_info u_v_i_ops
@@ -2174,6 +2201,12 @@ static CONSTEXPR const rvv_op_info u_wvv_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      wvv_args /* Args */};
 
+static CONSTEXPR const rvv_op_info u_shift_wvx_ops
+  = {wextu_ops,				  /* Types */
+     OP_TYPE_vx,			  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     wv_size_args /* Args */};
+
 /* A static operand information for vector_type func (double demote type, double
  * demote scalar_type) function registration. */
 static CONSTEXPR const rvv_op_info i_wvx_ops
@@ -2604,6 +2637,101 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
      rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
      ext_vcreate_args /* Args */};
 
+/* Static operand information for the vector crypto functions below.
+   Some instructions only support SEW=32, such as vaes*.
+   These entries are used for function registration.  */
+/* A list of args for vector_type func (vector_type, lmul1_type) function.  */
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x2_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x4_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x8_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_arg_type_info vs_lmul_x16_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x16),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+static CONSTEXPR const rvv_op_info u_vvv_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vv,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vv,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vvv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvvv_crypto_sew64_ops
+  = {crypto_sew64_ops,			   /* Types */
+     OP_TYPE_vv,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vvv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvv_size_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vi,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vv_size_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vi,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_size_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x2_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     vs_lmul_x2_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x4_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+     vs_lmul_x4_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x8_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), /* Return type */
+     vs_lmul_x8_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvs_crypto_sew32_lmul_x16_ops
+  = {crypto_sew32_ops,			   /* Types */
+     OP_TYPE_vs,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x16), /* Return type */
+     vs_lmul_x16_args /* Args */};
+
+/* Static operand information for the vector crypto functions below.
+   Some instructions only support SEW=64, such as vclmul.vv and vclmul.vx.
+   These entries are used for function registration.  */
+static CONSTEXPR const rvv_op_info u_vvv_crypto_sew64_ops
+  = {crypto_sew64_ops,			   /* Types */
+     OP_TYPE_vv,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vv_args /* Args */};
+
+static CONSTEXPR const rvv_op_info u_vvx_crypto_sew64_ops
+  = {crypto_sew64_ops,			   /* Types */
+     OP_TYPE_vx,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     vx_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
 #define DEF_RVV_TYPE_INDEX(                                                    \
@@ -2689,6 +2817,14 @@ static function_group_info function_groups[] = {
 #include "riscv-vector-builtins-functions.def"
 };
 
+/* A list of all Vector Crypto intrinsic functions.  */
+static crypto_function_group_info crypto_function_groups[] = {
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL) \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, \
+   riscv_vector_crypto_avail_ ## AVAIL},
+#include "riscv-vector-crypto-builtins-functions.def"
+};
+
 /* The RVV types, with their built-in
    "__rvv..._t" name.  Allow an index of NUM_VECTOR_TYPES, which always
    yields a null tree.  */
@@ -4176,7 +4312,9 @@ registered_function::overloaded_hash (const vec<tree, va_gc> &arglist)
        __riscv_vset(vint8m2_t dest, size_t index, vint8m1_t value); The reason
        is the same as above. */
       if ((instance.base == bases::vget && (i == (len - 1)))
-	  || (instance.base == bases::vset && (i == (len - 2))))
+	  || ((instance.base == bases::vset
+               || instance.shape == shapes::crypto_vi)
+              && (i == (len - 2))))
 	argument_types.safe_push (size_type_node);
       /* Vector fixed-point arithmetic instructions requiring argument vxrm.
 	     For example: vuint32m4_t __riscv_vaaddu(vuint32m4_t vs2,
@@ -4414,6 +4552,16 @@ handle_pragma_vector ()
   function_builder builder;
   for (unsigned int i = 0; i < ARRAY_SIZE (function_groups); ++i)
     builder.register_function_group (function_groups[i]);
+
+  /* Since the vector crypto extensions depend on the vector extension,
+     their intrinsic function registration is placed last.  */
+  for (unsigned int i = 0; i < ARRAY_SIZE (crypto_function_groups); ++i)
+    {
+      crypto_function_group_info *f = &crypto_function_groups[i];
+      if (f->avail ())
+        builder.register_function_group
+          (f->rvv_function_group_info);
+    }
 }
 
 /* Return the function decl with RVV function subcode CODE, or error_mark_node
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 6661629aad8..0c3ee3b2986 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -558,6 +558,7 @@ DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF, _f64m8,
 
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
+DEF_RVV_OP_TYPE (vi)
 DEF_RVV_OP_TYPE (v)
 DEF_RVV_OP_TYPE (wv)
 DEF_RVV_OP_TYPE (wx)
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index cd8ccab1724..5032a39c742 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -234,6 +234,14 @@ struct function_group_info
   const rvv_op_info ops_infos;
 };
 
+/* Static information about a set of functions.  */
+struct crypto_function_group_info
+{
+  struct function_group_info rvv_function_group_info;
+  /* Whether the function is available.  */
+  unsigned int (*avail) (void);
+};
+
 class GTY ((user)) function_instance
 {
 public:
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
new file mode 100755
index 00000000000..dc159e1914b
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-avail.h
@@ -0,0 +1,26 @@
+#ifndef GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+#define GCC_RISCV_VECTOR_CRYPTO_BUILTINS_AVAIL_H
+
+#include "insn-codes.h"
+namespace riscv_vector {
+
+/* Declare an availability predicate for built-in functions.  */
+#define AVAIL(NAME, COND)		\
+ static unsigned int			\
+ riscv_vector_crypto_avail_##NAME (void)	\
+ {					\
+   return (COND);			\
+ }
+
+AVAIL (zvbb, TARGET_ZVBB)
+AVAIL (zvbc, TARGET_ZVBC)
+AVAIL (zvkb_or_zvbb, TARGET_ZVKB || TARGET_ZVBB)
+AVAIL (zvkg, TARGET_ZVKG)
+AVAIL (zvksh, TARGET_ZVKSH)
+AVAIL (zvkned, TARGET_ZVKNED)
+AVAIL (zvknha_or_zvknhb, TARGET_ZVKNHA || TARGET_ZVKNHB)
+AVAIL (zvknhb, TARGET_ZVKNHB)
+AVAIL (zvksed, TARGET_ZVKSED)
+
+}
+#endif
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
new file mode 100755
index 00000000000..99b07a23fc8
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-functions.def
@@ -0,0 +1,85 @@
+#ifndef DEF_VECTOR_CRYPTO_FUNCTION
+#define DEF_VECTOR_CRYPTO_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO, AVAIL)
+#endif
+
+
+// ZVBB
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vandn, zvbb_zvbc, full_preds, u_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev, zvbb_zvbc, full_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vbrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrev8, zvbb_zvbc, full_preds, u_vv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vclz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vctz, zvbb_zvbc, none_m_preds, u_vv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vrol, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_vvv_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vror, zvbb_zvbc, full_preds, u_shift_vvx_ops, zvkb_or_zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_wvv_ops, zvbb)
+DEF_VECTOR_CRYPTO_FUNCTION (vwsll, zvbb_zvbc, full_preds, u_shift_wvx_ops, zvbb)
+
+// ZVBC
+DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmul, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvv_crypto_sew64_ops, zvbc)
+DEF_VECTOR_CRYPTO_FUNCTION (vclmulh, zvbb_zvbc, full_preds, u_vvx_crypto_sew64_ops, zvbc)
+
+// ZVKG
+DEF_VECTOR_CRYPTO_FUNCTION (vghsh, crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvkg)
+DEF_VECTOR_CRYPTO_FUNCTION (vgmul, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkg)
+
+// ZVKNED
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesef,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesem,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdf,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesdm,   crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaesz,    crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf1,  crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops,  zvkned)
+DEF_VECTOR_CRYPTO_FUNCTION (vaeskf2,  crypto_vi, none_tu_preds, u_vvv_size_crypto_sew32_ops, zvkned)
+
+// ZVKNHA
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ms,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ch,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew32_ops, zvknha_or_zvknhb)
+// ZVKNHB
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ms,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2ch,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+DEF_VECTOR_CRYPTO_FUNCTION (vsha2cl,  crypto_vv, none_tu_preds, u_vvvv_crypto_sew64_ops, zvknhb)
+
+// ZVKSED
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4k, crypto_vi, none_tu_preds, u_vv_size_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvv_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_ops, zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x2_ops,  zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x4_ops,  zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x8_ops,  zvksed)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm4r, crypto_vv, none_tu_preds, u_vvs_crypto_sew32_lmul_x16_ops, zvksed)
+
+// ZVKSH
+DEF_VECTOR_CRYPTO_FUNCTION (vsm3me, crypto_vv,    none_tu_preds, u_vvv_crypto_sew32_ops,      zvksh)
+DEF_VECTOR_CRYPTO_FUNCTION (vsm3c,  crypto_vi,    none_tu_preds, u_vvv_size_crypto_sew32_ops, zvksh)
diff --git a/gcc/config/riscv/riscv-vector-crypto-builtins-types.def b/gcc/config/riscv/riscv-vector-crypto-builtins-types.def
new file mode 100755
index 00000000000..f40367ae2c3
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-crypto-builtins-types.def
@@ -0,0 +1,21 @@
+#ifndef DEF_RVV_CRYPTO_SEW32_OPS
+#define DEF_RVV_CRYPTO_SEW32_OPS(TYPE, REQUIRE)
+#endif
+
+#ifndef DEF_RVV_CRYPTO_SEW64_OPS
+#define DEF_RVV_CRYPTO_SEW64_OPS(TYPE, REQUIRE)
+#endif
+
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m1_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m2_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m4_t, 0)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32m8_t, 0)
+
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_CRYPTO_SEW64_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+#undef DEF_RVV_CRYPTO_SEW32_OPS
+#undef DEF_RVV_CRYPTO_SEW64_OPS
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..838713695b4 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,33 @@
 ;; vcompress    vector compress instruction
 ;; vmov         whole vector register move
 ;; vector       unknown vector instruction
+;; vandn        vector crypto bitwise and-not instructions
+;; vbrev        vector crypto reverse bits in elements instructions
+;; vbrev8       vector crypto reverse bits in bytes instructions
+;; vrev8        vector crypto reverse bytes instructions
+;; vclz         vector crypto count leading zeros instructions
+;; vctz         vector crypto count trailing zeros instructions
+;; vrol         vector crypto rotate left instructions
+;; vror         vector crypto rotate right instructions
+;; vwsll        vector crypto widening shift left logical instructions
+;; vclmul       vector crypto carry-less multiply - return low half instructions
+;; vclmulh      vector crypto carry-less multiply - return high half instructions
+;; vghsh        vector crypto add-multiply over GHASH Galois-Field instructions
+;; vgmul        vector crypto multiply over GHASH Galois-Field instructions
+;; vaesef       vector crypto AES final-round encryption instructions
+;; vaesem       vector crypto AES middle-round encryption instructions
+;; vaesdf       vector crypto AES final-round decryption instructions
+;; vaesdm       vector crypto AES middle-round decryption instructions
+;; vaeskf1      vector crypto AES-128 Forward KeySchedule generation instructions
+;; vaeskf2      vector crypto AES-256 Forward KeySchedule generation instructions
+;; vaesz        vector crypto AES round zero encryption/decryption instructions
+;; vsha2ms      vector crypto SHA-2 message schedule instructions
+;; vsha2ch      vector crypto SHA-2 two rounds of compression instructions
+;; vsha2cl      vector crypto SHA-2 two rounds of compression instructions
+;; vsm4k        vector crypto SM4 KeyExpansion instructions
+;; vsm4r        vector crypto SM4 Rounds instructions
+;; vsm3me       vector crypto SM3 Message Expansion instructions
+;; vsm3c        vector crypto SM3 Compression instructions
 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +474,9 @@
    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
-   vgather,vcompress,vmov,vector"
+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,
+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,vsha2ms,
+   vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3776,7 @@
 (include "thead.md")
 (include "generic-ooo.md")
 (include "vector.md")
+(include "vector-crypto.md")
 (include "zicond.md")
 (include "zc.md")
 (include "corev.md")
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index 0c6517bdc8b..78186fff6c5 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -319,6 +319,8 @@ Mask(ZVBB) Var(riscv_zvb_subext)
 
 Mask(ZVBC) Var(riscv_zvb_subext)
 
+Mask(ZVKB) Var(riscv_zvb_subext)
+
 TargetVariable
 int riscv_zvk_subext
 
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 3b9686daa58..429d36b6425 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -1,6 +1,8 @@
 RISCV_BUILTINS_H = $(srcdir)/config/riscv/riscv-vector-builtins.h \
 		   $(srcdir)/config/riscv/riscv-vector-builtins.def \
 		   $(srcdir)/config/riscv/riscv-vector-builtins-functions.def \
+		   $(srcdir)/config/riscv/riscv-vector-crypto-builtins-avail.h \
+		   $(srcdir)/config/riscv/riscv-vector-crypto-builtins-functions.def \
 		   riscv-vector-type-indexer.gen.def
 
 riscv-builtins.o: $(srcdir)/config/riscv/riscv-builtins.cc $(CONFIG_H) \
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100644
index 00000000000..8778a336375
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,519 @@
+(define_c_enum "unspec" [
+    ;; Zvbb unspecs
+    UNSPEC_VANDN
+    UNSPEC_VBREV
+    UNSPEC_VBREV8
+    UNSPEC_VREV8
+    UNSPEC_VCLZ
+    UNSPEC_VCTZ
+    UNSPEC_VROL
+    UNSPEC_VROR
+    UNSPEC_VWSLL
+    UNSPEC_VCLMUL
+    UNSPEC_VCLMULH
+    UNSPEC_VGHSH
+    UNSPEC_VGMUL
+    UNSPEC_VAESEF
+    UNSPEC_VAESEFVV
+    UNSPEC_VAESEFVS
+    UNSPEC_VAESEM
+    UNSPEC_VAESEMVV
+    UNSPEC_VAESEMVS
+    UNSPEC_VAESDF
+    UNSPEC_VAESDFVV
+    UNSPEC_VAESDFVS
+    UNSPEC_VAESDM
+    UNSPEC_VAESDMVV
+    UNSPEC_VAESDMVS
+    UNSPEC_VAESZ
+    UNSPEC_VAESZVVNULL
+    UNSPEC_VAESZVS
+    UNSPEC_VAESKF1
+    UNSPEC_VAESKF2
+    UNSPEC_VSHA2MS
+    UNSPEC_VSHA2CH
+    UNSPEC_VSHA2CL
+    UNSPEC_VSM4K
+    UNSPEC_VSM4R
+    UNSPEC_VSM4RVV
+    UNSPEC_VSM4RVS
+    UNSPEC_VSM3ME
+    UNSPEC_VSM3C
+])
+
+(define_int_attr ror_rol [(UNSPEC_VROL "rol") (UNSPEC_VROR "ror")])
+
+(define_int_attr lt [(UNSPEC_VCLZ "lz") (UNSPEC_VCTZ "tz")])
+
+(define_int_attr rev  [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VAESEFVV "aesef") (UNSPEC_VAESEMVV "aesem")
+                              (UNSPEC_VAESDFVV "aesdf") (UNSPEC_VAESDMVV "aesdm")
+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VGMUL    "gmul" )
+                              (UNSPEC_VAESEFVS "aesef") (UNSPEC_VAESEMVS "aesem")
+                              (UNSPEC_VAESDFVS "aesdf") (UNSPEC_VAESDMVS "aesdm")
+                              (UNSPEC_VAESZVS  "aesz" ) (UNSPEC_VSM4RVS  "sm4r" )])
+
+(define_int_attr ins_type [(UNSPEC_VAESEFVV "vv") (UNSPEC_VAESEMVV "vv")
+                           (UNSPEC_VAESDFVV "vv") (UNSPEC_VAESDMVV "vv")
+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VGMUL    "vv")
+                           (UNSPEC_VAESEFVS "vs") (UNSPEC_VAESEMVS "vs")
+                           (UNSPEC_VAESDFVS "vs") (UNSPEC_VAESDMVS "vs")
+                           (UNSPEC_VAESZVS  "vs") (UNSPEC_VSM4RVS  "vs")])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VSHA2MS "sha2ms") (UNSPEC_VSHA2CH "sha2ch")
+                               (UNSPEC_VSHA2CL "sha2cl") (UNSPEC_VGHSH "ghsh")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_iterator UNSPEC_VRORL [UNSPEC_VROL UNSPEC_VROR])
+
+(define_int_iterator UNSPEC_VCLTZ [UNSPEC_VCLZ UNSPEC_VCTZ])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VAESEFVV UNSPEC_VAESEMVV UNSPEC_VAESDFVV
+                                       UNSPEC_VAESDMVV UNSPEC_VSM4RVV  UNSPEC_VGMUL
+                                       UNSPEC_VAESEFVS UNSPEC_VAESEMVS UNSPEC_VAESDFVS
+                                       UNSPEC_VAESDMVS UNSPEC_VAESZVS  UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL UNSPEC_VGHSH])
+
+;; zvbb instruction patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")
+	     (match_operand:VI 4 "register_operand"  "vr,vr")]UNSPEC_VANDN)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"     "vr,vr")
+	     (match_operand:<VEL> 4 "register_operand"  "r,  r")]UNSPEC_VANDN)
+	  (match_operand:VI 2 "vector_merge_operand"    "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK,rK")
+	     (match_operand 6 "const_int_operand"        "i,  i")
+	     (match_operand 7 "const_int_operand"        "i,  i")
+	     (match_operand 8 "const_int_operand"        "i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")
+	     (match_operand:VI 4 "register_operand"  "vr,vr")]UNSPEC_VRORL)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<ror_rol>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "v<ror_rol>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<ror_rol><mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK, rK")
+	     (match_operand 6 "const_int_operand"        "i,   i")
+	     (match_operand 7 "const_int_operand"        "i,   i")
+	     (match_operand 8 "const_int_operand"        "i,   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"    "vr, vr")
+	     (match_operand 4 "pmode_register_operand" "r,   r")]UNSPEC_VRORL)
+	  (match_operand:VI 2 "vector_merge_operand"   "vu,  0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<ror_rol>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "v<ror_rol>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+	     (match_operand 5 "vector_length_operand"    "rK, rK")
+	     (match_operand 6 "const_int_operand"        "i,   i")
+	     (match_operand 7 "const_int_operand"        "i,   i")
+	     (match_operand 8 "const_int_operand"        "i,   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr, vr")
+	     (match_operand 4 "const_csr_operand"     "K,  K")]UNSPEC_VROR)
+	  (match_operand:VI 2 "vector_merge_operand" "vu,  0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vror.vi\t%0,%3,%4%p1"
+  [(set_attr "type" "vror")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+  [(set (match_operand:VWEXTI 0 "register_operand"       "=&vd")
+	(if_then_else:VWEXTI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK")
+	     (match_operand 6 "const_int_operand"        "    i")
+	     (match_operand 7 "const_int_operand"        "    i")
+	     (match_operand 8 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VWEXTI
+	    [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"  "vr")
+	     (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand"  "vr")]UNSPEC_VWSLL)
+	  (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+  [(set (match_operand:VWEXTI 0 "register_operand"       "=&vd")
+	(if_then_else:VWEXTI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK")
+	     (match_operand 6 "const_int_operand"        "    i")
+	     (match_operand 7 "const_int_operand"        "    i")
+	     (match_operand 8 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VWEXTI
+	    [(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "vr")
+	     (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK")]UNSPEC_VWSLL)
+	  (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 5 "const_int_operand"        "    i,    i")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VI
+	    [(match_operand:VI 3 "register_operand"  "vr,vr")]UNSPEC_VRBB8)
+	  (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<rev>.v\t%0,%3%p1"
+  [(set_attr "type" "v<rev>")
+   (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+
+(define_insn "@pred_vc<lt><mode>"
+  [(set (match_operand:VI 0  "register_operand"  "=vd")
+        (unspec:VI
+          [(match_operand:VI 2 "register_operand" "vr")
+           (unspec:<VM>
+             [(match_operand:<VM> 1 "vector_mask_operand"   "vmWc1")
+              (match_operand 3      "vector_length_operand" "   rK")
+              (match_operand 4      "const_int_operand"     "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)]UNSPEC_VCLTZ))]
+  "TARGET_ZVBB"
+  "vc<lt>.v\t%0,%2%p1"
+  [(set_attr "type" "vc<lt>")
+   (set_attr "mode" "<MODE>")])
+
+;; zvbc instruction patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+
+(define_insn "@pred_vclmul<h><mode>"
+  [(set (match_operand:VDI 0  "register_operand" "=vd,vd")
+        (if_then_else:VDI
+	          (unspec:<VM>
+	            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	              (match_operand 5 "vector_length_operand"    "  rK,   rK")
+	              (match_operand 6 "const_int_operand"        "   i,    i")
+	              (match_operand 7 "const_int_operand"        "   i,    i")
+	              (match_operand 8 "const_int_operand"        "   i,    i")
+	              (reg:SI VL_REGNUM)
+	              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	          (unspec:VDI
+	            [(match_operand:VDI 3 "register_operand"  "vr,vr")
+	             (match_operand:VDI 4 "register_operand"  "vr,vr")]UNSPEC_CLMUL)
+	          (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBC && TARGET_64BIT"
+  "vclmul<h>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vclmul<h>")
+   (set_attr "mode" "<VDI:MODE>")])
+
+(define_insn "@pred_vclmul<h><mode>_scalar"
+  [(set (match_operand:VDI 0 "register_operand"       "=vd,vd")
+    (if_then_else:VDI
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+        (match_operand 5 "vector_length_operand"     "   rK,   rK")
+        (match_operand 6 "const_int_operand"         "    i,    i")
+        (match_operand 7 "const_int_operand"         "    i,    i")
+        (match_operand 8 "const_int_operand"         "    i,    i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (unspec:VDI
+        [(match_operand:VDI 3 "register_operand"      "vr,vr")
+        (match_operand:<VDI:VEL> 4 "register_operand" " r, r")]UNSPEC_CLMUL)
+      (match_operand:VDI 2 "vector_merge_operand"     "vu, 0")))]
+  "TARGET_ZVBC && TARGET_64BIT"
+  "vclmul<h>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vclmul<h>")
+   (set_attr "mode" "<VDI:MODE>")])
+
+;; zvkned, zvksed and zvkg instruction patterns.
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs]
+;; vaesdm.[vv,vs] vsm4r.[vv,vs]  vgmul.vv
+;; vaeskf1.vi     vaeskf2.vi     vaesz.vs     vsm4k.vi
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1 "register_operand" " 0")
+              (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED || TARGET_ZVKG"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1 "register_operand" " 0")
+              (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+  [(set (match_operand:<VSIX2> 0 "register_operand"           "=vd")
+        (if_then_else:<VSIX2>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX2>
+             [(match_operand:<VSIX2> 1  "register_operand" " 0")
+              (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX2_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX4>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX4>
+            [(match_operand:<VSIX4> 1 "register_operand" " 0")
+             (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX4_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX8>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX8>
+            [(match_operand:<VSIX8> 1 "register_operand" " 0")
+             (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX8_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand"           "=vd")
+       (if_then_else:<VSIX16>
+         (unspec:<VM>
+            [(match_operand 3 "vector_length_operand"     "rK")
+             (match_operand 4 "const_int_operand"         " i")
+             (match_operand 5 "const_int_operand"         " i")
+            (reg:SI VL_REGNUM)
+            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+         (unspec:<VSIX16>
+            [(match_operand:<VSIX16> 1 "register_operand" " 0")
+             (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+         (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+  (set_attr "mode" "<VLMULX16_SI:MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand"     "rK, rK")
+             (match_operand 5 "const_int_operand"         " i,  i")
+             (match_operand 6 "const_int_operand"         " i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 2   "register_operand"  "vr, vr")
+              (match_operand:<VEL> 3 "const_int_operand" " i,  i")] UNSPEC_CRYPTO_VI)
+          (match_operand:VSI 1 "vector_merge_operand"     "vu, 0")))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vi_ins_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd")
+        (if_then_else:VSI
+          (unspec:<VSI:VM>
+            [(match_operand 4 "vector_length_operand"     "rK")
+             (match_operand 5 "const_int_operand"         " i")
+             (match_operand 6 "const_int_operand"         " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 1   "register_operand"  "0")
+              (match_operand:VSI 2   "register_operand"  "vr")
+              (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSH"
+  "v<vi_ins1_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins1_name>")
+   (set_attr "mode" "<MODE>")])
+
+
+;; zvknh[ab] and zvkg instruction patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+
+(define_insn "@pred_v<vv_ins1_name><mode>"
+  [(set (match_operand:VQEXTI 0 "register_operand"       "=vd")
+        (if_then_else:VQEXTI
+          (unspec:<VQEXTI:VM>
+            [(match_operand 4 "vector_length_operand"    " rK")
+             (match_operand 5 "const_int_operand"        "  i")
+             (match_operand 6 "const_int_operand"        "  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VQEXTI
+             [(match_operand:VQEXTI 1 "register_operand" "  0")
+              (match_operand:VQEXTI 2 "register_operand" " vr")
+              (match_operand:VQEXTI 3 "register_operand" " vr")] UNSPEC_VGNHAB)
+          (match_dup 1)))]
+  "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+  "v<vv_ins1_name>.vv\t%0,%2,%3"
+  [(set_attr "type" "v<vv_ins1_name>")
+   (set_attr "mode" "<VQEXTI:MODE>")])
+
+;; zvksh instruction patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+  [(set (match_operand:VSI 0 "register_operand"           "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand"     "rK, rK")
+             (match_operand 5 "const_int_operand"         " i,  i")
+             (match_operand 6 "const_int_operand"         " i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+             [(match_operand:VSI 2 "register_operand" "vr, vr")
+              (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+          (match_operand:VSI 1 "vector_merge_operand" "vu,  0")))]
+  "TARGET_ZVKSH"
+  "vsm3me.vv\t%0,%2,%3"
+  [(set_attr "type" "vsm3me")
+   (set_attr "mode" "<VSI:MODE>")])
\ No newline at end of file
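
At the intrinsic level these patterns are reached through the vector crypto builtins
added earlier in this patch.  As orientation only, here is a hedged sketch of how the
destructive-destination form (operand 1 constrained to "0" and reused via
(match_dup 1)) looks from C; the intrinsic prototype is assumed from the vector
crypto intrinsic naming this series follows and is not defined in this hunk:

  #include <riscv_vector.h>

  /* One AES middle encryption round: the 128-bit round state in vd is both
     source and destination, the round key is vs2.  */
  static inline vuint32m1_t
  aes128_middle_round (vuint32m1_t state, vuint32m1_t round_key, size_t vl)
  {
    return __riscv_vaesem_vv_u32m1 (state, round_key, vl);
  }
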
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..a81f18f959d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,44 @@
   (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
   (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
   (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VDI [
+  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
+
+(define_mode_attr VSIX2 [
+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+  (RVVMF2SI "RVVM8SI")
+])
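
The VSIX2/VSIX4/VSIX8/VSIX16 attributes map an SEW=32 mode to the mode whose LMUL is
2/4/8/16 times larger; the .vs crypto patterns use them to give the destination a
wider register group than vs2.  A hedged example of the x2 case from the intrinsic
side (the intrinsic name is assumed from the naming convention this series follows):

  #include <riscv_vector.h>

  /* vaesz.vs with vd twice the LMUL of vs2: RVVMF2SI source group,
     RVVM1SI destination group.  */
  static inline vuint32m1_t
  aesz_x2 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl)
  {
    return __riscv_vaesz_vs_u32mf2_u32m1 (vd, vs2, vl);
  }
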
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..ae6aecc2903 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
 			  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-			  vssegtux,vssegtox,vlsegdff")
+			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+			  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,\
+			  vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -74,7 +76,9 @@
 			  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-			  vssegtux,vssegtox,vlsegdff")
+			  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+			  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,\
+			  vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -698,10 +702,14 @@
 				vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
 				vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
 				vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-				vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+				vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,\
+				vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
+				vclmul,vclmulh")
 	       (const_int 2)
 
-	       (eq_attr "type" "vimerge,vfmerge,vcompress")
+	       (eq_attr "type" "vimerge,vfmerge,vcompress,vaesef,vaesem,vaesdf,vaesdm,\
+                                vghsh,vgmul,vaeskf1,vaeskf2,vaesz,vsha2ms, vsha2ch,vsha2cl,vsm4k,vsm4r,\
+                                vsm3me,vsm3c")
 	       (const_int 1)
 
 	       (eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +748,8 @@
 			  vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
 			  vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
 			  vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
-			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+			  vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (const_int 4)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +764,15 @@
 			  vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+			  vror,vwsll,vclmul,vclmulh")
 	   (const_int 5)
 
 	 (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+                          vsm4r")
 	   (const_int 3)]
   (const_int INVALID_ATTRIBUTE)))
 
@@ -770,7 +781,8 @@
   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
 			  vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
 			  vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-			  vcompress,vldff,vlsegde,vlsegdff")
+			  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,vaeskf1,\
+                          vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (symbol_ref "riscv_vector::get_ta(operands[5])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +798,13 @@
 			  vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
 			  vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ta(operands[6])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (symbol_ref "riscv_vector::get_ta(operands[7])")
 
-	 (eq_attr "type" "vmidx")
+	 (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
 	   (symbol_ref "riscv_vector::get_ta(operands[4])")]
 	(const_int INVALID_ATTRIBUTE)))
 
@@ -800,7 +812,7 @@
 (define_attr "ma" ""
   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
 			  vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
-			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (symbol_ref "riscv_vector::get_ma(operands[6])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +827,8 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
 			  vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
-			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+			  vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ma(operands[7])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -831,7 +844,7 @@
 			  vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
 			  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
 			  vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
-			  vimovxv,vfmovfv,vlsegde,vlsegdff")
+			  vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 7)
 	 (eq_attr "type" "vldm,vstm,vmalu,vmalu")
 	   (const_int 5)
@@ -848,7 +861,7 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
 	   (const_int 8)
 	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
 	   (const_int 5)
@@ -856,10 +869,10 @@
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (const_int 9)
 
-	 (eq_attr "type" "vmsfs,vmidx,vcompress")
+	 (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms, vsha2ch,vsha2cl,vsm4k,\
+	                  vsm3me,vsm3c")
 	   (const_int 6)
-
-	 (eq_attr "type" "vmpop,vmffs,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
 	   (const_int 4)]
 	(const_int INVALID_ATTRIBUTE)))
 
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
new file mode 100644
index 00000000000..b044c020f71
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0\.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0\.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
new file mode 100644
index 00000000000..9af458cd671
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vandn_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vandn\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
new file mode 100644
index 00000000000..6725744071d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
new file mode 100644
index 00000000000..bb9cffe6797
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
new file mode 100644
index 00000000000..25f6b1c343f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev8_overloaded.c
@@ -0,0 +1,543 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
new file mode 100644
index 00000000000..8a4d7bb7f8c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vbrev_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vbrev\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
new file mode 100644
index 00000000000..df19efd3c7d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
new file mode 100644
index 00000000000..b4466bd4c23
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vclz_overloaded.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vclz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
new file mode 100644
index 00000000000..ce3cc6dc551
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u64m8_m(mask, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
new file mode 100644
index 00000000000..df083ea4504
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vctz_overloaded.c
@@ -0,0 +1,187 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(mask, vs2, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]} 44 } } */
+/* { dg-final { scan-assembler-times {vctz\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 22 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
new file mode 100644
index 00000000000..a6533d11081
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_m(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
new file mode 100644
index 00000000000..49ce752376b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrev8_overloaded.c
@@ -0,0 +1,542 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 22 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrev8\.v\s+v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
new file mode 100644
index 00000000000..2bc8c44d738
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
new file mode 100644
index 00000000000..2ce4559dc29
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vrol_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vrol\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
new file mode 100644
index 00000000000..144be5c2756
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror.c
@@ -0,0 +1,1073 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
new file mode 100644
index 00000000000..bebcf7facad
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vror_overloaded.c
@@ -0,0 +1,1072 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 88 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 44 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 88 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 132 } } */
+/* { dg-final { scan-assembler-times {vror\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 88 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
new file mode 100644
index 00000000000..e1946261e4f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 60 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
new file mode 100644
index 00000000000..512d76f5850
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/vwsll_overloaded.c
@@ -0,0 +1,736 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 60 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 30 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 60 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 90 } } */
+/* { dg-final { scan-assembler-times {vwsll\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 60 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
new file mode 100644
index 00000000000..cf956ced59e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbb/zvkb.c
@@ -0,0 +1,48 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkb -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
new file mode 100644
index 00000000000..ba3e5cf858e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
new file mode 100644
index 00000000000..1e25831f3f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmul_overloaded.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
new file mode 100644
index 00000000000..c14b8a56490
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
new file mode 100644
index 00000000000..ed3c4388af6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvbc/vclmulh_overloaded.c
@@ -0,0 +1,208 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvbc -mabi=lp64d -O3 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 16 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*mu} 8 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 16 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]} 24 } } */
+/* { dg-final { scan-assembler-times {vclmulh\.vx\s+v[0-9]+,\s*v[0-9]+,\s*a[0-9]+,\s*v0.t} 16 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
new file mode 100644
index 00000000000..db5abe97065
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvk.exp
@@ -0,0 +1,57 @@
+# Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbb/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvbc/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkg/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvkned/*.\[cS\]]] \
+	"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknha/*.\[cS\]]] \
+	"" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvknhb/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvksed/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/zvksh/*.\[cS\]]] \
+        "" $DEFAULT_CFLAGS
+
+# All done.
+dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
new file mode 100644
index 00000000000..3837f99fea3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vghsh\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
new file mode 100644
index 00000000000..2d2004bc653
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vghsh_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vghsh\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
new file mode 100644
index 00000000000..902de106c12
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vgmul\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
new file mode 100644
index 00000000000..53397ebc69b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkg/vgmul_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkg -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vgmul\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
new file mode 100644
index 00000000000..8fcfd493f2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf.c
@@ -0,0 +1,169 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
new file mode 100644
index 00000000000..b8570818358
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdf_overloaded.c
@@ -0,0 +1,169 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+/* non-policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdf\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
new file mode 100644
index 00000000000..1d4a1711cc9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
new file mode 100644
index 00000000000..4247ba3901b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesdm_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesdm\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
new file mode 100644
index 00000000000..93a79ffa51c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
new file mode 100644
index 00000000000..9e3998ef055
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesef_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesef\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
new file mode 100644
index 00000000000..43e468c6f0e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
new file mode 100644
index 00000000000..bb2e7dea733
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesem_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vaesem\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
new file mode 100644
index 00000000000..0edbb6d9108
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
new file mode 100644
index 00000000000..63e3537a06b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf1_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf1\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
new file mode 100644
index 00000000000..06fed681d6a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
new file mode 100644
index 00000000000..da7f42aef88
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaeskf2_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vaeskf2\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
new file mode 100644
index 00000000000..fbbbeaa78ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
new file mode 100644
index 00000000000..9130fbdc4ef
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvkned/vaesz_overloaded.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvkned -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 15 } } */
+/* { dg-final { scan-assembler-times {vaesz\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
new file mode 100644
index 00000000000..2dea4bbb89f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
new file mode 100644
index 00000000000..5a16400f800
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ch_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
new file mode 100644
index 00000000000..b2bbc1559f6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
new file mode 100644
index 00000000000..7a54466b204
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2cl_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
new file mode 100644
index 00000000000..57523576c3a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
new file mode 100644
index 00000000000..4d31ee0ee34
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknha/vsha2ms_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknha -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
new file mode 100644
index 00000000000..811c313887b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
new file mode 100644
index 00000000000..a09f9876c75
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ch_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ch\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
new file mode 100644
index 00000000000..f44c5a2cfbf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
new file mode 100644
index 00000000000..2354ab54a63
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2cl_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2cl\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
new file mode 100644
index 00000000000..45aba16119d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
new file mode 100644
index 00000000000..3cad2e09fc7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvknhb/vsha2ms_overloaded.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvknhb -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms(vd, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 9 } } */
+/* { dg-final { scan-assembler-times {vsha2ms\.vv\s+v[0-9]+,\s*v[0-9]} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
new file mode 100644
index 00000000000..7a8a0857f31
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm4k\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
new file mode 100644
index 00000000000..dd06a7e58d8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4k_overloaded.c
@@ -0,0 +1,50 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm4k\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
new file mode 100644
index 00000000000..dac66db3abb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
new file mode 100644
index 00000000000..6311adfb2d5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksed/vsm4r_overloaded.c
@@ -0,0 +1,170 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksed -mabi=lp64d -O2 -Wno-psabi" } */
+#include "riscv_vector.h"
+
+/* non-policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+/* policy */
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 20 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vv\s+v[0-9]+,\s*v[0-9]} 10 } } */
+/* { dg-final { scan-assembler-times {vsm4r\.vs\s+v[0-9]+,\s*v[0-9]} 30 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
new file mode 100644
index 00000000000..1cea2489708
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl);
+}
+
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3c\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
new file mode 100644
index 00000000000..01b4c0fbb95
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3c_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm3c(vd, vs2, 0, vl);
+}
+
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm3c_tu(vd, vs2, 0, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3c\.vi\s+v[0-9]+,\s*v[0-9]+,0} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
new file mode 100644
index 00000000000..78fdf741643
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3me\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 10 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c
new file mode 100644
index 00000000000..00c9cfe56ca
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zvk/zvksh/vsm3me_overloaded.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zvksh -mabi=lp64d -O2 -Wno-psabi" } */
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*ta,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s*zero,\s*[a-x0-9]+,\s*[a-x0-9]+,m[a-x0-9]+,\s*tu,\s*ma} 5 } } */
+/* { dg-final { scan-assembler-times {vsm3me\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 10 } } */
\ No newline at end of file
-- 
2.17.1

