public inbox for gcc-cvs@sourceware.org
* [gcc r13-6488] RISC-V: Add RVV misc intrinsic support
From: Kito Cheng @ 2023-03-05  9:17 UTC
  To: gcc-cvs

https://gcc.gnu.org/g:7caa1ae5e451e780fbc4746a54e3f19d4f4304dc

commit r13-6488-g7caa1ae5e451e780fbc4746a54e3f19d4f4304dc
Author: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
Date:   Thu Mar 2 16:01:52 2023 +0800

    RISC-V: Add RVV misc intrinsic support
    
    gcc/ChangeLog:
    
            * config/riscv/predicates.md (vector_any_register_operand): New predicate.
            * config/riscv/riscv-c.cc (riscv_check_builtin_call): New function.
            (riscv_register_pragmas): Add builtin function check call.
            * config/riscv/riscv-protos.h (RVV_VUNDEF): Adapt macro.
            (check_builtin_call): New function.
            * config/riscv/riscv-vector-builtins-bases.cc (class vundefined): New class.
            (class vreinterpret): Ditto.
            (class vlmul_ext): Ditto.
            (class vlmul_trunc): Ditto.
            (class vset): Ditto.
            (class vget): Ditto.
            (BASE): Ditto.
            * config/riscv/riscv-vector-builtins-bases.h: Ditto.
            * config/riscv/riscv-vector-builtins-functions.def (vluxei8): Change name.
            (vluxei16): Ditto.
            (vluxei32): Ditto.
            (vluxei64): Ditto.
            (vloxei8): Ditto.
            (vloxei16): Ditto.
            (vloxei32): Ditto.
            (vloxei64): Ditto.
            (vsuxei8): Ditto.
            (vsuxei16): Ditto.
            (vsuxei32): Ditto.
            (vsuxei64): Ditto.
            (vsoxei8): Ditto.
            (vsoxei16): Ditto.
            (vsoxei32): Ditto.
            (vsoxei64): Ditto.
            (vundefined): Add new intrinsic.
            (vreinterpret): Ditto.
            (vlmul_ext): Ditto.
            (vlmul_trunc): Ditto.
            (vset): Ditto.
            (vget): Ditto.
            * config/riscv/riscv-vector-builtins-shapes.cc (struct return_mask_def): New class.
            (struct narrow_alu_def): Ditto.
            (struct reduc_alu_def): Ditto.
            (struct vundefined_def): Ditto.
            (struct misc_def): Ditto.
            (struct vset_def): Ditto.
            (struct vget_def): Ditto.
            (SHAPE): Ditto.
            * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
            * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_EEW8_INTERPRET_OPS): New def.
            (DEF_RVV_EEW16_INTERPRET_OPS): Ditto.
            (DEF_RVV_EEW32_INTERPRET_OPS): Ditto.
            (DEF_RVV_EEW64_INTERPRET_OPS): Ditto.
            (DEF_RVV_X2_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X4_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X8_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X16_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X32_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X64_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_LMUL1_OPS): Ditto.
            (DEF_RVV_LMUL2_OPS): Ditto.
            (DEF_RVV_LMUL4_OPS): Ditto.
            (vint16mf4_t): Ditto.
            (vint16mf2_t): Ditto.
            (vint16m1_t): Ditto.
            (vint16m2_t): Ditto.
            (vint16m4_t): Ditto.
            (vint16m8_t): Ditto.
            (vint32mf2_t): Ditto.
            (vint32m1_t): Ditto.
            (vint32m2_t): Ditto.
            (vint32m4_t): Ditto.
            (vint32m8_t): Ditto.
            (vint64m1_t): Ditto.
            (vint64m2_t): Ditto.
            (vint64m4_t): Ditto.
            (vint64m8_t): Ditto.
            (vuint16mf4_t): Ditto.
            (vuint16mf2_t): Ditto.
            (vuint16m1_t): Ditto.
            (vuint16m2_t): Ditto.
            (vuint16m4_t): Ditto.
            (vuint16m8_t): Ditto.
            (vuint32mf2_t): Ditto.
            (vuint32m1_t): Ditto.
            (vuint32m2_t): Ditto.
            (vuint32m4_t): Ditto.
            (vuint32m8_t): Ditto.
            (vuint64m1_t): Ditto.
            (vuint64m2_t): Ditto.
            (vuint64m4_t): Ditto.
            (vuint64m8_t): Ditto.
            (vint8mf4_t): Ditto.
            (vint8mf2_t): Ditto.
            (vint8m1_t): Ditto.
            (vint8m2_t): Ditto.
            (vint8m4_t): Ditto.
            (vint8m8_t): Ditto.
            (vuint8mf4_t): Ditto.
            (vuint8mf2_t): Ditto.
            (vuint8m1_t): Ditto.
            (vuint8m2_t): Ditto.
            (vuint8m4_t): Ditto.
            (vuint8m8_t): Ditto.
            (vint8mf8_t): Ditto.
            (vuint8mf8_t): Ditto.
            (vfloat32mf2_t): Ditto.
            (vfloat32m1_t): Ditto.
            (vfloat32m2_t): Ditto.
            (vfloat32m4_t): Ditto.
            (vfloat64m1_t): Ditto.
            (vfloat64m2_t): Ditto.
            (vfloat64m4_t): Ditto.
            * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Ditto.
            (DEF_RVV_EEW8_INTERPRET_OPS): Ditto.
            (DEF_RVV_EEW16_INTERPRET_OPS): Ditto.
            (DEF_RVV_EEW32_INTERPRET_OPS): Ditto.
            (DEF_RVV_EEW64_INTERPRET_OPS): Ditto.
            (DEF_RVV_X2_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X4_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X8_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X16_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X32_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_X64_VLMUL_EXT_OPS): Ditto.
            (DEF_RVV_LMUL1_OPS): Ditto.
            (DEF_RVV_LMUL2_OPS): Ditto.
            (DEF_RVV_LMUL4_OPS): Ditto.
            (DEF_RVV_TYPE_INDEX): Ditto.
            (required_extensions_p): Adapt for new intrinsic support.
            (get_required_extensions): New function.
            (check_required_extensions): Ditto.
            (unsigned_base_type_p): Remove.
            (rvv_arg_type_info::get_scalar_ptr_type): New function.
            (get_mode_for_bitsize): Remove.
            (rvv_arg_type_info::get_scalar_const_ptr_type): New function.
            (rvv_arg_type_info::get_base_vector_type): Ditto.
            (rvv_arg_type_info::get_function_type_index): Ditto.
            (DEF_RVV_BASE_TYPE): New def.
            (function_builder::apply_predication): New function.
            (function_expander::mask_mode): Ditto.
            (function_checker::function_checker): Ditto.
            (function_checker::report_non_ice): Ditto.
            (function_checker::report_out_of_range): Ditto.
            (function_checker::require_immediate): Ditto.
            (function_checker::require_immediate_range): Ditto.
            (function_checker::check): Ditto.
            (check_builtin_call): Ditto.
            * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): New def.
            (DEF_RVV_BASE_TYPE): Ditto.
            (DEF_RVV_TYPE_INDEX): Ditto.
            (vbool64_t): Ditto.
            (vbool32_t): Ditto.
            (vbool16_t): Ditto.
            (vbool8_t): Ditto.
            (vbool4_t): Ditto.
            (vbool2_t): Ditto.
            (vbool1_t): Ditto.
            (vuint8mf8_t): Ditto.
            (vuint8mf4_t): Ditto.
            (vuint8mf2_t): Ditto.
            (vuint8m1_t): Ditto.
            (vuint8m2_t): Ditto.
            (vint8m4_t): Ditto.
            (vuint8m4_t): Ditto.
            (vint8m8_t): Ditto.
            (vuint8m8_t): Ditto.
            (vint16mf4_t): Ditto.
            (vuint16mf2_t): Ditto.
            (vuint16m1_t): Ditto.
            (vuint16m2_t): Ditto.
            (vuint16m4_t): Ditto.
            (vuint16m8_t): Ditto.
            (vint32mf2_t): Ditto.
            (vuint32m1_t): Ditto.
            (vuint32m2_t): Ditto.
            (vuint32m4_t): Ditto.
            (vuint32m8_t): Ditto.
            (vuint64m1_t): Ditto.
            (vuint64m2_t): Ditto.
            (vuint64m4_t): Ditto.
            (vuint64m8_t): Ditto.
            (vfloat32mf2_t): Ditto.
            (vfloat32m1_t): Ditto.
            (vfloat32m2_t): Ditto.
            (vfloat32m4_t): Ditto.
            (vfloat32m8_t): Ditto.
            (vfloat64m1_t): Ditto.
            (vfloat64m4_t): Ditto.
            (vector): Move its def.
            (scalar): Ditto.
            (mask): Ditto.
            (signed_vector): Ditto.
            (unsigned_vector): Ditto.
            (unsigned_scalar): Ditto.
            (vector_ptr): Ditto.
            (scalar_ptr): Ditto.
            (scalar_const_ptr): Ditto.
            (void): Ditto.
            (size): Ditto.
            (ptrdiff): Ditto.
            (unsigned_long): Ditto.
            (long): Ditto.
            (eew8_index): Ditto.
            (eew16_index): Ditto.
            (eew32_index): Ditto.
            (eew64_index): Ditto.
            (shift_vector): Ditto.
            (double_trunc_vector): Ditto.
            (quad_trunc_vector): Ditto.
            (oct_trunc_vector): Ditto.
            (double_trunc_scalar): Ditto.
            (double_trunc_signed_vector): Ditto.
            (double_trunc_unsigned_vector): Ditto.
            (double_trunc_unsigned_scalar): Ditto.
            (double_trunc_float_vector): Ditto.
            (float_vector): Ditto.
            (lmul1_vector): Ditto.
            (widen_lmul1_vector): Ditto.
            (eew8_interpret): Ditto.
            (eew16_interpret): Ditto.
            (eew32_interpret): Ditto.
            (eew64_interpret): Ditto.
            (vlmul_ext_x2): Ditto.
            (vlmul_ext_x4): Ditto.
            (vlmul_ext_x8): Ditto.
            (vlmul_ext_x16): Ditto.
            (vlmul_ext_x32): Ditto.
            (vlmul_ext_x64): Ditto.
            * config/riscv/riscv-vector-builtins.h (DEF_RVV_BASE_TYPE): New def.
            (struct function_type_info): New struct.
            (struct rvv_arg_type_info): Ditto.
            (class function_checker): New class.
            (rvv_arg_type_info::get_scalar_type): New function.
            (rvv_arg_type_info::get_vector_type): Ditto.
            (function_expander::ret_mode): New function.
            (function_checker::arg_mode): Ditto.
            (function_checker::ret_mode): Ditto.
            * config/riscv/t-riscv: Add generator.
            * config/riscv/vector-iterators.md: New iterators.
            * config/riscv/vector.md (vundefined<mode>): New pattern.
            (@vundefined<mode>): Ditto.
            (@vreinterpret<mode>): Ditto.
            (@vlmul_extx2<mode>): Ditto.
            (@vlmul_extx4<mode>): Ditto.
            (@vlmul_extx8<mode>): Ditto.
            (@vlmul_extx16<mode>): Ditto.
            (@vlmul_extx32<mode>): Ditto.
            (@vlmul_extx64<mode>): Ditto.
            (*vlmul_extx2<mode>): Ditto.
            (*vlmul_extx4<mode>): Ditto.
            (*vlmul_extx8<mode>): Ditto.
            (*vlmul_extx16<mode>): Ditto.
            (*vlmul_extx32<mode>): Ditto.
            (*vlmul_extx64<mode>): Ditto.
            * config/riscv/genrvv-type-indexer.cc: New file.
    
    gcc/testsuite/ChangeLog:
    
            * gcc.target/riscv/rvv/base/vlmul_v.c: New test.
    
    Co-authored-by: kito-cheng <kito.cheng@sifive.com>

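As a quick illustration of the user-facing intrinsics this adds, a minimal
C sketch (assuming the __riscv_-prefixed, non-overloaded names of the
intrinsic spec this work tracks; exact spellings are defined there, not in
this mail):

    #include <riscv_vector.h>

    vint32m1_t
    misc_sketch (vint32m1_t a, vint32m2_t grp)
    {
      /* Same-size reinterpret between element types.  */
      vuint32m1_t u = __riscv_vreinterpret_v_i32m1_u32m1 (a);
      /* Grow or shrink the register-group size (LMUL).  */
      vint32m2_t ext = __riscv_vlmul_ext_v_i32m1_i32m2 (a);
      vint32m1_t trunc = __riscv_vlmul_trunc_v_i32m2_i32m1 (ext);
      /* Don't-care value of a given type.  */
      vint32m2_t und = __riscv_vundefined_i32m2 ();
      /* Insert/extract an LMUL=1 part of an LMUL=2 group; the index
         must be a constant in range (enforced by function_checker).  */
      vint32m2_t s = __riscv_vset_v_i32m1_i32m2 (grp, 1, a);
      (void) u; (void) trunc; (void) und;
      return __riscv_vget_v_i32m2_i32m1 (s, 0);
    }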
Diff:
---
 gcc/config/riscv/genrvv-type-indexer.cc            |  313 +++++
 gcc/config/riscv/predicates.md                     |    4 +
 gcc/config/riscv/riscv-c.cc                        |   20 +
 gcc/config/riscv/riscv-protos.h                    |    5 +-
 gcc/config/riscv/riscv-vector-builtins-bases.cc    |  126 ++
 gcc/config/riscv/riscv-vector-builtins-bases.h     |    6 +
 .../riscv/riscv-vector-builtins-functions.def      |   69 +-
 gcc/config/riscv/riscv-vector-builtins-shapes.cc   |   95 +-
 gcc/config/riscv/riscv-vector-builtins-shapes.h    |    4 +
 gcc/config/riscv/riscv-vector-builtins-types.def   |  366 +++++
 gcc/config/riscv/riscv-vector-builtins.cc          |  934 +++++++++----
 gcc/config/riscv/riscv-vector-builtins.def         |  239 ++--
 gcc/config/riscv/riscv-vector-builtins.h           |  119 +-
 gcc/config/riscv/t-riscv                           |   18 +
 gcc/config/riscv/vector-iterators.md               |   96 ++
 gcc/config/riscv/vector.md                         |  134 +-
 gcc/testsuite/gcc.target/riscv/rvv/base/vlmul_v.c  | 1448 ++++++++++++++++++++
 17 files changed, 3610 insertions(+), 386 deletions(-)

diff --git a/gcc/config/riscv/genrvv-type-indexer.cc b/gcc/config/riscv/genrvv-type-indexer.cc
new file mode 100644
index 00000000000..0ef1d766002
--- /dev/null
+++ b/gcc/config/riscv/genrvv-type-indexer.cc
@@ -0,0 +1,313 @@
+/* Generate the RVV type indexer tables.
+   Copyright (C) 2023-2023 Free Software Foundation, Inc.
+This file is part of GCC.
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "bconfig.h"
+#include "system.h"
+#include "errors.h"
+
+#include "coretypes.h"
+
+#include <sstream>
+#include <assert.h>
+#include <math.h>
+
+std::string
+to_lmul (int lmul_log2)
+{
+  std::stringstream lmul_str;
+  if (lmul_log2 >= 0)
+    lmul_str << "m";
+  else
+    {
+      lmul_str << "mf";
+      lmul_log2 = -lmul_log2;
+    }
+
+  lmul_str << (1 << lmul_log2);
+  return lmul_str.str ();
+}
+
+bool
+valid_type (unsigned sew, int lmul_log2, bool float_p)
+{
+  if (lmul_log2 > 3)
+    return false;
+
+  switch (sew)
+    {
+    case 8:
+      return lmul_log2 >= -3 && !float_p;
+    case 16:
+      return lmul_log2 >= -2 && !float_p;
+    case 32:
+      return lmul_log2 >= -1;
+    case 64:
+      return lmul_log2 >= 0;
+    default:
+      return false;
+    }
+}
+
+std::string
+inttype (unsigned sew, int lmul_log2, bool unsigned_p)
+{
+  if (!valid_type (sew, lmul_log2, /*float_t*/ false))
+    return "INVALID";
+
+  std::stringstream mode;
+  mode << "v";
+  if (unsigned_p)
+    mode << "u";
+  mode << "int" << sew << to_lmul (lmul_log2) << "_t";
+  return mode.str ();
+}
+
+std::string
+floattype (unsigned sew, int lmul_log2)
+{
+  if (!valid_type (sew, lmul_log2, /*float_t*/ true))
+    return "INVALID";
+
+  std::stringstream mode;
+  mode << "vfloat" << sew << to_lmul (lmul_log2) << "_t";
+  return mode.str ();
+}
+
+std::string
+maskmode (unsigned sew, int lmul_log2)
+{
+  if (!valid_type (sew, lmul_log2, /*float_t*/ false))
+    return "INVALID";
+
+  std::stringstream mode;
+
+  int mlen;
+  if (lmul_log2 >= 0)
+    mlen = sew / (1 << lmul_log2);
+  else
+    mlen = sew * (1 << -lmul_log2);
+
+  mode << "vbool" << mlen << "_t";
+  return mode.str ();
+}
+
+std::string
+same_ratio_eew_type (unsigned sew, int lmul_log2, unsigned eew, bool unsigned_p,
+		     bool float_p)
+{
+  if (!valid_type (sew, lmul_log2, float_p))
+    return "INVALID";
+
+  int elmul_log2;
+
+  if (sew == eew)
+    elmul_log2 = lmul_log2;
+  else if (sew > eew)
+    elmul_log2 = lmul_log2 - std::log2 (sew / eew);
+  else /* sew < eew */
+    elmul_log2 = lmul_log2 + std::log2 (eew / sew);
+
+  if (float_p)
+    return floattype (eew, elmul_log2);
+  else
+    return inttype (eew, elmul_log2, unsigned_p);
+}
+
+int
+main (int argc, const char **argv)
+{
+  // Require at least one argument.
+  if (argc < 2)
+    return 1;
+
+  FILE *fp = fopen (argv[1], "w");
+
+  if (!fp)
+    return 1;
+
+  fprintf (fp, "/* Generated by genrvv-type-indexer */\n");
+
+  for (unsigned vbool : {64, 32, 16, 8, 4, 2, 1})
+    {
+      std::stringstream mode;
+      mode << "vbool" << vbool << "_t";
+      fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
+      fprintf (fp, "  /*VECTOR*/ %s,\n", mode.str ().c_str ());
+      fprintf (fp, "  /*MASK*/ %s,\n", mode.str ().c_str ());
+      fprintf (fp, "  /*SIGNED*/ INVALID,\n");
+      fprintf (fp, "  /*UNSIGNED*/ INVALID,\n");
+      for (unsigned eew : {8, 16, 32, 64})
+	fprintf (fp, "  /*EEW%d_INDEX*/ INVALID,\n", eew);
+      fprintf (fp, "  /*SHIFT*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC*/ INVALID,\n");
+      fprintf (fp, "  /*QUAD_TRUNC*/ INVALID,\n");
+      fprintf (fp, "  /*OCT_TRUNC*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC_SCALAR*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC_SIGNED*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
+      fprintf (fp, "  /*DOUBLE_TRUNC_FLOAT*/ INVALID,\n");
+      fprintf (fp, "  /*FLOAT*/ INVALID,\n");
+      fprintf (fp, "  /*LMUL1*/ INVALID,\n");
+      fprintf (fp, "  /*WLMUL1*/ INVALID,\n");
+      for (unsigned eew : {8, 16, 32, 64})
+	fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
+
+      for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
+	{
+	  unsigned multiple_of_lmul = 1 << lmul_log2_offset;
+	  const char *comma = lmul_log2_offset == 6 ? "" : ",";
+	  fprintf (fp, "  /*X%d_INTERPRET*/ INVALID%s\n", multiple_of_lmul,
+		   comma);
+	}
+      fprintf (fp, ")\n");
+    }
+
+  // Build for vint and vuint
+  for (unsigned sew : {8, 16, 32, 64})
+    for (int lmul_log2 : {-3, -2, -1, 0, 1, 2, 3})
+      for (bool unsigned_p : {false, true})
+	{
+	  if (!valid_type (sew, lmul_log2, /*float_t*/ false))
+	    continue;
+
+	  fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
+	  fprintf (fp, "  /*VECTOR*/ %s,\n",
+		   inttype (sew, lmul_log2, unsigned_p).c_str ());
+	  fprintf (fp, "  /*MASK*/ %s,\n", maskmode (sew, lmul_log2).c_str ());
+	  fprintf (fp, "  /*SIGNED*/ %s,\n",
+		   inttype (sew, lmul_log2, /*unsigned_p*/ false).c_str ());
+	  fprintf (fp, "  /*UNSIGNED*/ %s,\n",
+		   inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	  for (unsigned eew : {8, 16, 32, 64})
+	    fprintf (fp, "  /*EEW%d_INDEX*/ %s,\n", eew,
+		     same_ratio_eew_type (sew, lmul_log2, eew,
+					  /*unsigned_p*/ true, false)
+		       .c_str ());
+	  fprintf (fp, "  /*SHIFT*/ %s,\n",
+		   inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
+					false)
+		     .c_str ());
+	  fprintf (fp, "  /*QUAD_TRUNC*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 4, unsigned_p,
+					false)
+		     .c_str ());
+	  fprintf (fp, "  /*OCT_TRUNC*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 8, unsigned_p,
+					false)
+		     .c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC_SCALAR*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
+					false)
+		     .c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC_SIGNED*/ INVALID,\n");
+	  fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false)
+		     .c_str ());
+	  if (unsigned_p)
+	    fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
+	  else
+	    fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false)
+		       .c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC_FLOAT*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true)
+		     .c_str ());
+	  fprintf (fp, "  /*FLOAT*/ %s,\n",
+		   floattype (sew, lmul_log2).c_str ());
+	  fprintf (fp, "  /*LMUL1*/ %s,\n",
+		   inttype (sew, /*lmul_log2*/ 0, unsigned_p).c_str ());
+	  fprintf (fp, "  /*WLMUL1*/ %s,\n",
+		   inttype (sew * 2, /*lmul_log2*/ 0, unsigned_p).c_str ());
+	  for (unsigned eew : {8, 16, 32, 64})
+	    {
+	      if (eew == sew)
+		fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
+	      else
+		fprintf (fp, "  /*EEW%d_INTERPRET*/ %s,\n", eew,
+			 inttype (eew, lmul_log2, unsigned_p).c_str ());
+	    }
+
+	  for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
+	    {
+	      unsigned multiple_of_lmul = 1 << lmul_log2_offset;
+	      const char *comma = lmul_log2_offset == 6 ? "" : ",";
+	      fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s%s\n", multiple_of_lmul,
+		       inttype (sew, lmul_log2 + lmul_log2_offset, unsigned_p)
+			 .c_str (),
+		       comma);
+	    }
+	  fprintf (fp, ")\n");
+	}
+  // Build for vfloat
+  for (unsigned sew : {32, 64})
+    for (int lmul_log2 : {-3, -2, -1, 0, 1, 2, 3})
+      {
+	if (!valid_type (sew, lmul_log2, /*float_t*/ true))
+	  continue;
+
+	fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
+	fprintf (fp, "  /*VECTOR*/ %s,\n", floattype (sew, lmul_log2).c_str ());
+	fprintf (fp, "  /*MASK*/ %s,\n", maskmode (sew, lmul_log2).c_str ());
+	fprintf (fp, "  /*SIGNED*/ %s,\n",
+		 inttype (sew, lmul_log2, /*unsigned_p*/ false).c_str ());
+	fprintf (fp, "  /*UNSIGNED*/ %s,\n",
+		 inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	for (unsigned eew : {8, 16, 32, 64})
+	  fprintf (fp, "  /*EEW%d_INDEX*/ %s,\n", eew,
+		   same_ratio_eew_type (sew, lmul_log2, eew,
+					/*unsigned_p*/ true, false)
+		     .c_str ());
+	fprintf (fp, "  /*SHIFT*/ INVALID,\n");
+	fprintf (
+	  fp, "  /*DOUBLE_TRUNC*/ %s,\n",
+	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
+	fprintf (fp, "  /*QUAD_TRUNC*/ INVALID,\n");
+	fprintf (fp, "  /*OCT_TRUNC*/ INVALID,\n");
+	fprintf (
+	  fp, "  /*DOUBLE_TRUNC_SCALAR*/ %s,\n",
+	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
+	fprintf (
+	  fp, "  /*DOUBLE_TRUNC_SIGNED*/ %s,\n",
+	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, false).c_str ());
+	fprintf (
+	  fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ %s,\n",
+	  same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false).c_str ());
+	fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
+	fprintf (
+	  fp, "  /*DOUBLE_TRUNC_FLOAT*/ %s,\n",
+	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
+	fprintf (fp, "  /*FLOAT*/ INVALID,\n");
+	fprintf (fp, "  /*LMUL1*/ %s,\n",
+		 floattype (sew, /*lmul_log2*/ 0).c_str ());
+	fprintf (fp, "  /*WLMUL1*/ %s,\n",
+		 floattype (sew * 2, /*lmul_log2*/ 0).c_str ());
+	for (unsigned eew : {8, 16, 32, 64})
+	  fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
+	for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
+	  {
+	    unsigned multiple_of_lmul = 1 << lmul_log2_offset;
+	    const char *comma = lmul_log2_offset == 6 ? "" : ",";
+	    fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s%s\n", multiple_of_lmul,
+		     floattype (sew, lmul_log2 + lmul_log2_offset).c_str (),
+		     comma);
+	  }
+	fprintf (fp, ")\n");
+      }
+
+  return 0;
+}
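To make the generator's helper mappings concrete, a few values that follow
directly from the functions above (SEW in bits, lmul_log2 = log2 of LMUL):

    to_lmul (2)                                  -> "m4"
    to_lmul (-2)                                 -> "mf4"
    inttype (32, 0, /*unsigned_p*/ true)         -> "vuint32m1_t"
    maskmode (32, 1)                             -> "vbool16_t"    (MLEN = 32/2)
    same_ratio_eew_type (32, 1, 8, true, false)  -> "vuint8mf2_t"  (keeps SEW/LMUL = 16)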
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 06a51325537..0d9d7701c7e 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -346,6 +346,10 @@
   (ior (match_operand 0 "const_0_operand")
        (match_operand 0 "pmode_register_operand")))
 
+;; A special predicate that doesn't match a particular mode.
+(define_special_predicate "vector_any_register_operand"
+  (match_code "reg"))
+
 ;; The scalar operand can be directly broadcast by RVV instructions.
 (define_predicate "direct_broadcast_operand"
   (and (match_test "!(reload_completed && !FLOAT_MODE_P (GET_MODE (op))
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index 220951f99a6..ff07d319d0b 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -184,10 +184,30 @@ riscv_pragma_intrinsic (cpp_reader *)
     error ("unknown %<#pragma riscv intrinsic%> option %qs", name);
 }
 
+/* Implement TARGET_CHECK_BUILTIN_CALL.  */
+static bool
+riscv_check_builtin_call (location_t loc, vec<location_t> arg_loc, tree fndecl,
+			  tree orig_fndecl, unsigned int nargs, tree *args)
+{
+  unsigned int code = DECL_MD_FUNCTION_CODE (fndecl);
+  unsigned int subcode = code >> RISCV_BUILTIN_SHIFT;
+  switch (code & RISCV_BUILTIN_CLASS)
+    {
+    case RISCV_BUILTIN_GENERAL:
+      return true;
+
+    case RISCV_BUILTIN_VECTOR:
+      return riscv_vector::check_builtin_call (loc, arg_loc, subcode,
+					       orig_fndecl, nargs, args);
+    }
+  gcc_unreachable ();
+}
+
 /* Implement REGISTER_TARGET_PRAGMAS.  */
 
 void
 riscv_register_pragmas (void)
 {
+  targetm.check_builtin_call = riscv_check_builtin_call;
   c_register_pragma ("riscv", "intrinsic", riscv_pragma_intrinsic);
 }
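For reference, the packing that riscv_check_builtin_call decodes puts the
builtin class in the low bits and the per-class subcode above it (a sketch
of the relation, following the decode shown above):

    /* klass   = code & RISCV_BUILTIN_CLASS;
       subcode = code >> RISCV_BUILTIN_SHIFT;
       i.e. code = (subcode << RISCV_BUILTIN_SHIFT) | klass.  */

Vector builtins are thus routed to riscv_vector::check_builtin_call with
the class bits stripped, while general builtins are always accepted.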
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 0e342b5d832..88a6bf5442f 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -122,7 +122,8 @@ void riscv_run_selftests (void);
 namespace riscv_vector {
 #define RVV_VLMAX gen_rtx_REG (Pmode, X0_REGNUM)
 #define RVV_VUNDEF(MODE)                                                       \
-  gen_rtx_UNSPEC (MODE, gen_rtvec (1, const0_rtx), UNSPEC_VUNDEF)
+  gen_rtx_UNSPEC (MODE, gen_rtvec (1, gen_rtx_REG (SImode, X0_REGNUM)),        \
+		  UNSPEC_VUNDEF)
 enum vlmul_type
 {
   LMUL_1 = 0,
@@ -150,6 +151,8 @@ bool verify_type_context (location_t, type_context_kind, const_tree, bool);
 void handle_pragma_vector (void);
 tree builtin_decl (unsigned, bool);
 rtx expand_builtin (unsigned int, tree, rtx);
+bool check_builtin_call (location_t, vec<location_t>, unsigned int,
+			   tree, unsigned int, tree *);
 bool const_vec_all_same_in_range_p (rtx, HOST_WIDE_INT, HOST_WIDE_INT);
 bool legitimize_move (rtx, rtx, machine_mode);
 void emit_vlmax_op (unsigned, rtx, rtx, machine_mode);
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 1797c70e7b1..533f40487b6 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -1422,6 +1422,120 @@ public:
   }
 };
 
+class vundefined : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.generate_insn (code_for_vundefined (e.vector_mode ()));
+  }
+};
+
+class vreinterpret : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    e.add_input_operand (0);
+    return e.generate_insn (code_for_vreinterpret (e.ret_mode ()));
+  }
+};
+
+class vlmul_ext : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    e.add_input_operand (0);
+    switch (e.op_info->ret.base_type)
+      {
+      case RVV_BASE_vlmul_ext_x2:
+	return e.generate_insn (
+	  code_for_vlmul_extx2 (e.vector_mode ()));
+      case RVV_BASE_vlmul_ext_x4:
+	return e.generate_insn (
+	  code_for_vlmul_extx4 (e.vector_mode ()));
+      case RVV_BASE_vlmul_ext_x8:
+	return e.generate_insn (
+	  code_for_vlmul_extx8 (e.vector_mode ()));
+      case RVV_BASE_vlmul_ext_x16:
+	return e.generate_insn (
+	  code_for_vlmul_extx16 (e.vector_mode ()));
+      case RVV_BASE_vlmul_ext_x32:
+	return e.generate_insn (
+	  code_for_vlmul_extx32 (e.vector_mode ()));
+      case RVV_BASE_vlmul_ext_x64:
+	return e.generate_insn (
+	  code_for_vlmul_extx64 (e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+
+class vlmul_trunc : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    rtx src = expand_normal (CALL_EXPR_ARG (e.exp, 0));
+    emit_move_insn (e.target, gen_lowpart (GET_MODE (e.target), src));
+    return e.target;
+  }
+};
+
+class vset : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    rtx dest = expand_normal (CALL_EXPR_ARG (e.exp, 0));
+    rtx index = expand_normal (CALL_EXPR_ARG (e.exp, 1));
+    rtx src = expand_normal (CALL_EXPR_ARG (e.exp, 2));
+    poly_int64 offset = INTVAL (index) * GET_MODE_SIZE (GET_MODE (src));
+    emit_move_insn (e.target, dest);
+    rtx subreg = simplify_gen_subreg (GET_MODE (src), e.target,
+				      GET_MODE (e.target), offset);
+    emit_move_insn (subreg, src);
+    return e.target;
+  }
+};
+
+class vget : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    rtx src = expand_normal (CALL_EXPR_ARG (e.exp, 0));
+    rtx index = expand_normal (CALL_EXPR_ARG (e.exp, 1));
+    poly_int64 offset = INTVAL (index) * GET_MODE_SIZE (GET_MODE (src));
+    rtx subreg
+      = simplify_gen_subreg (GET_MODE (e.target), src, GET_MODE (src), offset);
+    return subreg;
+  }
+};
+
 static CONSTEXPR const vsetvl<false> vsetvl_obj;
 static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
 static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -1624,6 +1738,12 @@ static CONSTEXPR const slideop<UNSPEC_VFSLIDE1DOWN> vfslide1down_obj;
 static CONSTEXPR const vrgather vrgather_obj;
 static CONSTEXPR const vrgatherei16 vrgatherei16_obj;
 static CONSTEXPR const vcompress vcompress_obj;
+static CONSTEXPR const vundefined vundefined_obj;
+static CONSTEXPR const vreinterpret vreinterpret_obj;
+static CONSTEXPR const vlmul_ext vlmul_ext_obj;
+static CONSTEXPR const vlmul_trunc vlmul_trunc_obj;
+static CONSTEXPR const vset vset_obj;
+static CONSTEXPR const vget vget_obj;
 
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
@@ -1832,5 +1952,11 @@ BASE (vfslide1down)
 BASE (vrgather)
 BASE (vrgatherei16)
 BASE (vcompress)
+BASE (vundefined)
+BASE (vreinterpret)
+BASE (vlmul_ext)
+BASE (vlmul_trunc)
+BASE (vset)
+BASE (vget)
 
 } // end namespace riscv_vector
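Note how the new expanders avoid dedicated unspecs where a subreg is
enough: vlmul_trunc and vget are lowpart/offset subreg reads, and vset
first copies the whole group and then stores the new part through a
subreg at

    offset = INDEX * GET_MODE_SIZE (GET_MODE (src));   /* from vset::expand */

so for an LMUL=2 group built from two LMUL=1 parts, index 0 addresses the
low half and index 1 the high half.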
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 5078bcf9c72..5e05b35b084 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -232,6 +232,12 @@ extern const function_base *const vfslide1down;
 extern const function_base *const vrgather;
 extern const function_base *const vrgatherei16;
 extern const function_base *const vcompress;
+extern const function_base *const vundefined;
+extern const function_base *const vreinterpret;
+extern const function_base *const vlmul_ext;
+extern const function_base *const vlmul_trunc;
+extern const function_base *const vset;
+extern const function_base *const vget;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 638daa24596..c0d752e569f 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -54,22 +54,22 @@ DEF_RVV_FUNCTION (vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_op
 DEF_RVV_FUNCTION (vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
 
 // 7.6. Vector Indexed Instructions
-DEF_RVV_FUNCTION (vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint8_index_ops)
-DEF_RVV_FUNCTION (vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint16_index_ops)
-DEF_RVV_FUNCTION (vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint32_index_ops)
-DEF_RVV_FUNCTION (vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint64_index_ops)
-DEF_RVV_FUNCTION (vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint8_index_ops)
-DEF_RVV_FUNCTION (vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint16_index_ops)
-DEF_RVV_FUNCTION (vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint32_index_ops)
-DEF_RVV_FUNCTION (vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_uint64_index_ops)
-DEF_RVV_FUNCTION (vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint8_index_ops)
-DEF_RVV_FUNCTION (vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint16_index_ops)
-DEF_RVV_FUNCTION (vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint32_index_ops)
-DEF_RVV_FUNCTION (vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint64_index_ops)
-DEF_RVV_FUNCTION (vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint8_index_ops)
-DEF_RVV_FUNCTION (vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint16_index_ops)
-DEF_RVV_FUNCTION (vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint32_index_ops)
-DEF_RVV_FUNCTION (vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_uint64_index_ops)
+DEF_RVV_FUNCTION (vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_RVV_FUNCTION (vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_RVV_FUNCTION (vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_RVV_FUNCTION (vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_RVV_FUNCTION (vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_RVV_FUNCTION (vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_RVV_FUNCTION (vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_RVV_FUNCTION (vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_RVV_FUNCTION (vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_RVV_FUNCTION (vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_RVV_FUNCTION (vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_RVV_FUNCTION (vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_RVV_FUNCTION (vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_RVV_FUNCTION (vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_RVV_FUNCTION (vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_RVV_FUNCTION (vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
 
 // TODO: 7.7. Unit-stride Fault-Only-First Loads
 // TODO: 7.8. Vector Load/Store Segment Instructions
@@ -490,4 +490,41 @@ DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
 // 16.5. Vector Compress Instruction
 DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
 
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+
 #undef DEF_RVV_FUNCTION
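Each DEF_RVV_FUNCTION row above reads (NAME, SHAPE, PREDICATION-SET,
OPERAND-TYPE-SET).  For example,

    DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)

registers vget with the vget shape, with no mask/policy predication
variants, over every type combination pairing an LMUL=1 part with its
x2 group.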
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index d08a96c0764..2bf72e7af0a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -277,8 +277,7 @@ struct return_mask_def : public build_base
       {
 	b.append_name (type_suffixes[instance.type.index].vector);
 	vector_type_index ret_type_idx
-	  = instance.op_info->ret.get_base_vector_type (
-	    builtin_types[instance.type.index].vector);
+	  = instance.op_info->ret.get_function_type_index (instance.type.index);
 	b.append_name (type_suffixes[ret_type_idx].vector);
       }
 
@@ -303,8 +302,7 @@ struct narrow_alu_def : public build_base
 	b.append_name (operand_suffixes[instance.op_info->op]);
 	/* vop_<op> --> vop_<op>_<type>.  */
 	vector_type_index ret_type_idx
-	  = instance.op_info->ret.get_base_vector_type (
-	    builtin_types[instance.type.index].vector);
+	  = instance.op_info->ret.get_function_type_index (instance.type.index);
 	b.append_name (type_suffixes[ret_type_idx].vector);
       }
 
@@ -388,8 +386,7 @@ struct reduc_alu_def : public build_base
 	b.append_name (operand_suffixes[instance.op_info->op]);
 	b.append_name (type_suffixes[instance.type.index].vector);
 	vector_type_index ret_type_idx
-	  = instance.op_info->ret.get_base_vector_type (
-	    builtin_types[instance.type.index].vector);
+	  = instance.op_info->ret.get_function_type_index (instance.type.index);
 	b.append_name (type_suffixes[ret_type_idx].vector);
       }
 
@@ -418,6 +415,88 @@ struct scalar_move_def : public build_base
   }
 };
 
+/* vundefined_def class.  */
+struct vundefined_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    if (overloaded_p)
+      return nullptr;
+    b.append_base_name (instance.base_name);
+    b.append_name (type_suffixes[instance.type.index].vector);
+    return b.finish_name ();
+  }
+};
+
+/* misc_def class.  */
+struct misc_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    b.append_base_name (instance.base_name);
+
+    if (!overloaded_p)
+      {
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	vector_type_index arg0_type_idx
+	  = instance.op_info->args[0].get_function_type_index (
+	    instance.type.index);
+	b.append_name (type_suffixes[arg0_type_idx].vector);
+      }
+
+    vector_type_index ret_type_idx
+      = instance.op_info->ret.get_function_type_index (instance.type.index);
+    b.append_name (type_suffixes[ret_type_idx].vector);
+    return b.finish_name ();
+  }
+};
+
+/* vset_def class.  */
+struct vset_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    b.append_base_name (instance.base_name);
+
+    if (!overloaded_p)
+      {
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	vector_type_index arg_type_idx
+	  = instance.op_info->args[2].get_function_type_index (
+	    instance.type.index);
+	b.append_name (type_suffixes[arg_type_idx].vector);
+
+	vector_type_index ret_type_idx
+	  = instance.op_info->ret.get_function_type_index (instance.type.index);
+	b.append_name (type_suffixes[ret_type_idx].vector);
+      }
+    return b.finish_name ();
+  }
+
+  bool check (function_checker &c) const override
+  {
+    poly_int64 outer_size = GET_MODE_SIZE (c.arg_mode (0));
+    poly_int64 inner_size = GET_MODE_SIZE (c.arg_mode (2));
+    unsigned int nvecs = exact_div (outer_size, inner_size).to_constant ();
+    return c.require_immediate (1, 0, nvecs - 1);
+  }
+};
+
+/* vget_def class.  */
+struct vget_def : public misc_def
+{
+  bool check (function_checker &c) const override
+  {
+    poly_int64 outer_size = GET_MODE_SIZE (c.arg_mode (0));
+    poly_int64 inner_size = GET_MODE_SIZE (c.ret_mode ());
+    unsigned int nvecs = exact_div (outer_size, inner_size).to_constant ();
+    return c.require_immediate (1, 0, nvecs - 1);
+  }
+};
+
 SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
@@ -431,5 +510,9 @@ SHAPE(move, move)
 SHAPE(mask_alu, mask_alu)
 SHAPE(reduc_alu, reduc_alu)
 SHAPE(scalar_move, scalar_move)
+SHAPE(vundefined, vundefined)
+SHAPE(misc, misc)
+SHAPE(vset, vset)
+SHAPE(vget, vget)
 
 } // end namespace riscv_vector
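The name building above concatenates base name, operand suffix and the
argument/return type suffixes, so (a sketch, assuming the usual suffix
spellings) vget from vint32m2_t to vint32m1_t ends up as
vget_v_i32m2_i32m1.  The check hooks bound the index operand: with
outer_size / inner_size = nvecs parts, require_immediate (1, 0, nvecs - 1)
requires argument 1 to be an integer constant in [0, nvecs - 1], e.g.
{0, 1} when extracting LMUL=1 parts from an x2 group.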
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index a192b941fd8..640ef42f069 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -37,6 +37,10 @@ extern const function_shape *const move;
 extern const function_shape *const mask_alu;
 extern const function_shape *const reduc_alu;
 extern const function_shape *const scalar_move;
+extern const function_shape *const vundefined;
+extern const function_shape *const misc;
+extern const function_shape *const vset;
+extern const function_shape *const vget;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index a77024f823f..a55d494f1d9 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -157,6 +157,84 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_EI16_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_EEW8_INTERPRET_OPS" macro include all types for EEW8 vinterpret
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_EEW8_INTERPRET_OPS
+#define DEF_RVV_EEW8_INTERPRET_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_EEW16_INTERPRET_OPS" macro include all types for EEW16
+   vinterpret which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_EEW16_INTERPRET_OPS
+#define DEF_RVV_EEW16_INTERPRET_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_EEW32_INTERPRET_OPS" macro include all types for EEW32
+   vinterpret which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_EEW32_INTERPRET_OPS
+#define DEF_RVV_EEW32_INTERPRET_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_EEW64_INTERPRET_OPS" macro include all types for EEW64
+   vinterpret which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_EEW64_INTERPRET_OPS
+#define DEF_RVV_EEW64_INTERPRET_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X2_VLMUL_EXT_OPS" macro include all types for X2 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X2_VLMUL_EXT_OPS
+#define DEF_RVV_X2_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X4_VLMUL_EXT_OPS" macro include all types for X4 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X4_VLMUL_EXT_OPS
+#define DEF_RVV_X4_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X8_VLMUL_EXT_OPS" macro include all types for X8 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X8_VLMUL_EXT_OPS
+#define DEF_RVV_X8_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X16_VLMUL_EXT_OPS" macro include all types for X16 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X16_VLMUL_EXT_OPS
+#define DEF_RVV_X16_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X32_VLMUL_EXT_OPS" macro include all types for X32 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X32_VLMUL_EXT_OPS
+#define DEF_RVV_X32_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_X64_VLMUL_EXT_OPS" macro include all types for X64 VLMUL EXT
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_X64_VLMUL_EXT_OPS
+#define DEF_RVV_X64_VLMUL_EXT_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_LMUL1_OPS" macro include all types for LMUL1
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_LMUL1_OPS
+#define DEF_RVV_LMUL1_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_LMUL2_OPS" macro include all types for LMUL2
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_LMUL2_OPS
+#define DEF_RVV_LMUL2_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_LMUL4_OPS" macro include all types for LMUL4
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_LMUL4_OPS
+#define DEF_RVV_LMUL4_OPS(TYPE, REQUIRE)
+#endif
+
 DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
 DEF_RVV_I_OPS (vint8mf4_t, 0)
 DEF_RVV_I_OPS (vint8mf2_t, 0)
@@ -465,6 +543,281 @@ DEF_RVV_EI16_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_EI16_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_EI16_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 
+DEF_RVV_EEW8_INTERPRET_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16mf2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16m1_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16m2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16m4_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16m8_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32m1_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32m2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32m4_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32m8_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16m1_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16m2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16m4_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16m8_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32m1_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32m2_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32m4_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32m8_t, 0)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_EEW16_INTERPRET_OPS (vint8mf4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint8mf2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint8m1_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint8m2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint8m4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint8m8_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32m1_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32m2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32m4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32m8_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8mf4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8mf2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8m1_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8m2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8m4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint8m8_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32m1_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32m2_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32m4_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32m8_t, 0)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_EEW32_INTERPRET_OPS (vint8mf2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint8m1_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint8m2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint8m4_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint8m8_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint16mf2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint16m1_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint16m2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint16m4_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint16m8_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint8mf2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint8m1_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint8m2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint8m4_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint8m8_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint16mf2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint16m1_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint16m2_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint16m4_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint16m8_t, 0)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_EEW32_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_EEW64_INTERPRET_OPS (vint8m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint8m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint8m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint8m8_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint16m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint16m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint16m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint16m8_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint32m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint32m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint32m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vint32m8_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint8m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint8m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint8m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint8m8_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint16m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint16m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint16m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint16m8_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint32m1_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint32m2_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint32m4_t, 0)
+DEF_RVV_EEW64_INTERPRET_OPS (vuint32m8_t, 0)
+
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint32m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint32m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint32m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m1_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m2_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m4_t, 0)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf4_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint16m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint16m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint32m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint32m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf4_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m1_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m2_t, 0)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf4_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf2_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint8m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf2_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint16m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint32m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf4_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf2_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint8m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf2_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint16m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint32m1_t, 0)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf4_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf2_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf2_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf4_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf2_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf2_t, 0)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+
+DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf4_t, 0)
+DEF_RVV_X32_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf4_t, 0)
+DEF_RVV_X32_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_X64_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_X64_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_LMUL1_OPS (vint8m1_t, 0)
+DEF_RVV_LMUL1_OPS (vint16m1_t, 0)
+DEF_RVV_LMUL1_OPS (vint32m1_t, 0)
+DEF_RVV_LMUL1_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL1_OPS (vuint8m1_t, 0)
+DEF_RVV_LMUL1_OPS (vuint16m1_t, 0)
+DEF_RVV_LMUL1_OPS (vuint32m1_t, 0)
+DEF_RVV_LMUL1_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL1_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_LMUL1_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_LMUL2_OPS (vint8m2_t, 0)
+DEF_RVV_LMUL2_OPS (vint16m2_t, 0)
+DEF_RVV_LMUL2_OPS (vint32m2_t, 0)
+DEF_RVV_LMUL2_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL2_OPS (vuint8m2_t, 0)
+DEF_RVV_LMUL2_OPS (vuint16m2_t, 0)
+DEF_RVV_LMUL2_OPS (vuint32m2_t, 0)
+DEF_RVV_LMUL2_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL2_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_LMUL2_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_LMUL4_OPS (vint8m4_t, 0)
+DEF_RVV_LMUL4_OPS (vint16m4_t, 0)
+DEF_RVV_LMUL4_OPS (vint32m4_t, 0)
+DEF_RVV_LMUL4_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL4_OPS (vuint8m4_t, 0)
+DEF_RVV_LMUL4_OPS (vuint16m4_t, 0)
+DEF_RVV_LMUL4_OPS (vuint32m4_t, 0)
+DEF_RVV_LMUL4_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_LMUL4_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_LMUL4_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+
 #undef DEF_RVV_I_OPS
 #undef DEF_RVV_U_OPS
 #undef DEF_RVV_F_OPS
@@ -487,3 +840,16 @@ DEF_RVV_EI16_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 #undef DEF_RVV_WU_OPS
 #undef DEF_RVV_WF_OPS
 #undef DEF_RVV_EI16_OPS
+#undef DEF_RVV_EEW8_INTERPRET_OPS
+#undef DEF_RVV_EEW16_INTERPRET_OPS
+#undef DEF_RVV_EEW32_INTERPRET_OPS
+#undef DEF_RVV_EEW64_INTERPRET_OPS
+#undef DEF_RVV_X2_VLMUL_EXT_OPS
+#undef DEF_RVV_X4_VLMUL_EXT_OPS
+#undef DEF_RVV_X8_VLMUL_EXT_OPS
+#undef DEF_RVV_X16_VLMUL_EXT_OPS
+#undef DEF_RVV_X32_VLMUL_EXT_OPS
+#undef DEF_RVV_X64_VLMUL_EXT_OPS
+#undef DEF_RVV_LMUL1_OPS
+#undef DEF_RVV_LMUL2_OPS
+#undef DEF_RVV_LMUL4_OPS
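
For context, the type lists above control which vector types receive the
new misc intrinsics.  A minimal sketch of the resulting C-level API (a
hedged illustration, not part of the patch; names assume the
__riscv_-prefixed v0.11-style spellings used elsewhere in this commit,
and the i32 types and slot index 2 are arbitrary):

  #include <riscv_vector.h>

  vint32m4_t
  build_group (vint32m1_t part)
  {
    /* vundefined: start from an unspecified LMUL=4 register group.  */
    vint32m4_t group = __riscv_vundefined_i32m4 ();
    /* vset: insert an LMUL=1 value into slot 2 of the LMUL=4 group.  */
    group = __riscv_vset_v_i32m1_i32m4 (group, 2, part);
    /* vget: read slot 2 back out.  */
    vint32m1_t back = __riscv_vget_v_i32m4_i32m1 (group, 2);
    /* vlmul_ext: reinterpret an LMUL=1 value at LMUL=2; vlmul_trunc
       goes the other way.  */
    vint32m2_t ext = __riscv_vlmul_ext_v_i32m1_i32m2 (back);
    (void) ext;
    return group;
  }
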
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6b32b28952a..2d57086262b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -106,20 +106,11 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
 const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-		     VSETVL_SUFFIX, MASK_TYPE)                                 \
+		     VSETVL_SUFFIX)                                            \
   {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
 #include "riscv-vector-builtins.def"
 };
 
-/* Mask type for each RVV type.  */
-const vector_type_index mask_types[NUM_VECTOR_TYPES + 1] = {
-#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
-		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-		     VSETVL_SUFFIX, MASK_TYPE)                                 \
-  VECTOR_TYPE_##MASK_TYPE,
-#include "riscv-vector-builtins.def"
-};
-
 /* Static information about predication suffix for each RVV type.  */
 const char *const predication_suffixes[NUM_PRED_TYPES] = {
   "", /* PRED_TYPE_none.  */
@@ -294,6 +285,87 @@ static const rvv_type_info oextu_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of eew8 interpret types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info eew8_interpret_ops[] = {
+#define DEF_RVV_EEW8_INTERPRET_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of eew16 interpret types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info eew16_interpret_ops[] = {
+#define DEF_RVV_EEW16_INTERPRET_OPS(TYPE, REQUIRE)                             \
+  {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of eew32 interpret types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info eew32_interpret_ops[] = {
+#define DEF_RVV_EEW32_INTERPRET_OPS(TYPE, REQUIRE)                             \
+  {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of eew64 interpret types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info eew64_interpret_ops[] = {
+#define DEF_RVV_EEW64_INTERPRET_OPS(TYPE, REQUIRE)                             \
+  {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x2 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x2_ops[] = {
+#define DEF_RVV_X2_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x4 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x4_ops[] = {
+#define DEF_RVV_X4_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x8 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x8_ops[] = {
+#define DEF_RVV_X8_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x16 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x16_ops[] = {
+#define DEF_RVV_X16_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x32 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x32_ops[] = {
+#define DEF_RVV_X32_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of x64 vlmul ext types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info vlmul_ext_x64_ops[] = {
+#define DEF_RVV_X64_VLMUL_EXT_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of LMUL1 types that will be registered for intrinsic functions.  */
+static const rvv_type_info lmul1_ops[] = {
+#define DEF_RVV_LMUL1_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of LMUL2 types that will be registered for intrinsic functions.  */
+static const rvv_type_info lmul2_ops[] = {
+#define DEF_RVV_LMUL2_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of LMUL4 types that will be registered for intrinsic functions.  */
+static const rvv_type_info lmul4_ops[] = {
+#define DEF_RVV_LMUL4_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
   = rvv_arg_type_info (NUM_BASE_TYPES);
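
All of the *_ops tables above use the same X-macro idiom: define one
DEF_RVV_* hook, re-include the shared .def file so that only the
matching entries expand, and let the .def file's #ifndef guards and
trailing #undef block reset every hook afterwards.  A stripped-down,
self-contained illustration of the technique (file and type names are
invented for exposition):

  /* shapes.def -- one shared entry list, expanded differently per user.  */
  #ifndef DEF_SMALL
  #define DEF_SMALL(T)
  #endif
  #ifndef DEF_BIG
  #define DEF_BIG(T)
  #endif
  DEF_SMALL (int8)
  DEF_SMALL (int16)
  DEF_BIG (int64)
  #undef DEF_SMALL
  #undef DEF_BIG

  /* user.c -- each table enables only the hook it cares about.  */
  static const char *small_names[] = {
  #define DEF_SMALL(T) #T,
  #include "shapes.def"
  };
  static const char *big_names[] = {
  #define DEF_BIG(T) #T,
  #include "shapes.def"
  };
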
 
@@ -330,56 +402,56 @@ static CONSTEXPR const rvv_arg_type_info scalar_ptr_ptrdiff_args[]
      rvv_arg_type_info (RVV_BASE_ptrdiff), rvv_arg_type_info (RVV_BASE_vector),
      rvv_arg_type_info_end};
 
-/* A list of args for vector_type func (const scalar_type *, uint8_index_type)
+/* A list of args for vector_type func (const scalar_type *, eew8_index_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_uint8_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_eew8_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
-     rvv_arg_type_info (RVV_BASE_uint8_index), rvv_arg_type_info_end};
+     rvv_arg_type_info (RVV_BASE_eew8_index), rvv_arg_type_info_end};
 
-/* A list of args for vector_type func (const scalar_type *, uint16_index_type)
+/* A list of args for vector_type func (const scalar_type *, eew16_index_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_uint16_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_eew16_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
-     rvv_arg_type_info (RVV_BASE_uint16_index), rvv_arg_type_info_end};
+     rvv_arg_type_info (RVV_BASE_eew16_index), rvv_arg_type_info_end};
 
-/* A list of args for vector_type func (const scalar_type *, uint32_index_type)
+/* A list of args for vector_type func (const scalar_type *, eew32_index_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_uint32_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_eew32_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
-     rvv_arg_type_info (RVV_BASE_uint32_index), rvv_arg_type_info_end};
+     rvv_arg_type_info (RVV_BASE_eew32_index), rvv_arg_type_info_end};
 
-/* A list of args for vector_type func (const scalar_type *, uint64_index_type)
+/* A list of args for vector_type func (const scalar_type *, eew64_index_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_uint64_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_eew64_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
-     rvv_arg_type_info (RVV_BASE_uint64_index), rvv_arg_type_info_end};
+     rvv_arg_type_info (RVV_BASE_eew64_index), rvv_arg_type_info_end};
 
-/* A list of args for void func (scalar_type *, uint8_index_type, vector_type)
+/* A list of args for void func (scalar_type *, eew8_index_type, vector_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_ptr_uint8_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_eew8_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
-     rvv_arg_type_info (RVV_BASE_uint8_index),
+     rvv_arg_type_info (RVV_BASE_eew8_index),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
-/* A list of args for void func (scalar_type *, uint16_index_type, vector_type)
+/* A list of args for void func (scalar_type *, eew16_index_type, vector_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_ptr_uint16_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_eew16_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
-     rvv_arg_type_info (RVV_BASE_uint16_index),
+     rvv_arg_type_info (RVV_BASE_eew16_index),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
-/* A list of args for void func (scalar_type *, uint32_index_type, vector_type)
+/* A list of args for void func (scalar_type *, eew32_index_type, vector_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_ptr_uint32_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_eew32_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
-     rvv_arg_type_info (RVV_BASE_uint32_index),
+     rvv_arg_type_info (RVV_BASE_eew32_index),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
-/* A list of args for void func (scalar_type *, uint64_index_type, vector_type)
+/* A list of args for void func (scalar_type *, eew64_index_type, vector_type)
  * function.  */
-static CONSTEXPR const rvv_arg_type_info scalar_ptr_uint64_index_args[]
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_eew64_index_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
-     rvv_arg_type_info (RVV_BASE_uint64_index),
+     rvv_arg_type_info (RVV_BASE_eew64_index),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
 /* A list of args for vector_type func (vector_type, vector_type) function.  */
@@ -447,7 +519,7 @@ static CONSTEXPR const rvv_arg_type_info gather_vv_args[]
 /* A list of args for vector_type func (vector_type, shift_type) function.  */
 static CONSTEXPR const rvv_arg_type_info gatherei16_vv_args[]
   = {rvv_arg_type_info (RVV_BASE_vector),
-     rvv_arg_type_info (RVV_BASE_uint16_index), rvv_arg_type_info_end};
+     rvv_arg_type_info (RVV_BASE_eew16_index), rvv_arg_type_info_end};
 
 /* A list of args for double demote type func (vector_type, shift_type)
  * function.  */
@@ -460,6 +532,30 @@ static CONSTEXPR const rvv_arg_type_info shift_wv_args[]
 static CONSTEXPR const rvv_arg_type_info v_args[]
   = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (vlmul_ext_x2_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x2_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x4_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x4_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x8_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x8_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x16_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x16_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x16), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x32_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x32_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x32), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x64_type) function.  */
+static CONSTEXPR const rvv_arg_type_info v_x64_trunc_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x64), rvv_arg_type_info_end};
+
 /* A list of args for vector_type func (vector_type, lmul1_type) function.  */
 static CONSTEXPR const rvv_arg_type_info vs_args[]
   = {rvv_arg_type_info (RVV_BASE_vector),
@@ -612,6 +708,39 @@ static CONSTEXPR const rvv_arg_type_info w_xu_v_args[]
   = {rvv_arg_type_info (RVV_BASE_double_trunc_unsigned_vector),
      rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (vlmul_ext_x2_type, size_t,
+ * vector_type) function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x2_vset_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
+     rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x4_type, size_t,
+ * vector_type) function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x4_vset_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
+     rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x8_type, size_t,
+ * vector_type) function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x8_vset_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
+     rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x2_type, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x2_vget_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x2),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x4_type, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x4_vget_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x4),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vlmul_ext_x8_type, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info ext_x8_vget_args[]
+  = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
 /* A list of none preds that will be registered for intrinsic functions.  */
 static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
@@ -637,7 +766,7 @@ static CONSTEXPR const predication_type_index none_m_preds[]
 static CONSTEXPR const predication_type_index none_m_mu_preds[]
   = {PRED_TYPE_none, PRED_TYPE_m, PRED_TYPE_mu, NUM_PRED_TYPES};
 
-/* A static operand information for size_t func (void) function registration. */
+/* A static operand information for size_t func () function registration. */
 static CONSTEXPR const rvv_op_info i_none_size_void_ops
   = {i_ops,				/* Types */
      OP_TYPE_none,			/* Suffix */
@@ -652,6 +781,14 @@ static CONSTEXPR const rvv_op_info i_none_size_size_ops
      rvv_arg_type_info (RVV_BASE_size), /* Return type */
      size_args /* Args */};
 
+/* A static operand information for vector_type func () function registration.
+ */
+static CONSTEXPR const rvv_op_info all_none_void_ops
+  = {all_ops,				  /* Types */
+     OP_TYPE_none,			  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     void_args /* Args */};
+
 /* A static operand information for vector_type func (const scalar_type *)
  * function registration. */
 static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ops
@@ -749,36 +886,36 @@ static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ptrdiff_ops
      scalar_const_ptr_ptrdiff_args /* Args */};
 
 /* A static operand information for vector_type func (const scalar_type *,
- * uint8_index_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_uint8_index_ops
+ * eew8_index_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_eew8_index_ops
   = {all_ops,				  /* Types */
      OP_TYPE_v,				  /* Suffix */
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
-     scalar_const_ptr_uint8_index_args /* Args */};
+     scalar_const_ptr_eew8_index_args /* Args */};
 
 /* A static operand information for vector_type func (const scalar_type *,
- * uint16_index_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_uint16_index_ops
+ * eew16_index_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_eew16_index_ops
   = {all_ops,				  /* Types */
      OP_TYPE_v,				  /* Suffix */
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
-     scalar_const_ptr_uint16_index_args /* Args */};
+     scalar_const_ptr_eew16_index_args /* Args */};
 
 /* A static operand information for vector_type func (const scalar_type *,
- * uint32_index_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_uint32_index_ops
+ * eew32_index_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_eew32_index_ops
   = {all_ops,				  /* Types */
      OP_TYPE_v,				  /* Suffix */
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
-     scalar_const_ptr_uint32_index_args /* Args */};
+     scalar_const_ptr_eew32_index_args /* Args */};
 
 /* A static operand information for vector_type func (const scalar_type *,
- * uint64_index_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_uint64_index_ops
+ * eew64_index_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_eew64_index_ops
   = {all_ops,				  /* Types */
      OP_TYPE_v,				  /* Suffix */
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
-     scalar_const_ptr_uint64_index_args /* Args */};
+     scalar_const_ptr_eew64_index_args /* Args */};
 
 /* A static operand information for void func (scalar_type *, ptrdiff_t,
  * vector_type) function registration. */
@@ -788,37 +925,37 @@ static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ptrdiff_ops
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
      scalar_ptr_ptrdiff_args /* Args */};
 
-/* A static operand information for void func (scalar_type *, uint8_index_type,
+/* A static operand information for void func (scalar_type *, eew8_index_type,
  * vector_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_ptr_uint8_index_ops
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_eew8_index_ops
   = {all_ops,				/* Types */
      OP_TYPE_v,				/* Suffix */
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
-     scalar_ptr_uint8_index_args /* Args */};
+     scalar_ptr_eew8_index_args /* Args */};
 
-/* A static operand information for void func (scalar_type *, uint16_index_type,
+/* A static operand information for void func (scalar_type *, eew16_index_type,
  * vector_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_ptr_uint16_index_ops
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_eew16_index_ops
   = {all_ops,				/* Types */
      OP_TYPE_v,				/* Suffix */
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
-     scalar_ptr_uint16_index_args /* Args */};
+     scalar_ptr_eew16_index_args /* Args */};
 
-/* A static operand information for void func (scalar_type *, uint32_index_type,
+/* A static operand information for void func (scalar_type *, eew32_index_type,
  * vector_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_ptr_uint32_index_ops
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_eew32_index_ops
   = {all_ops,				/* Types */
      OP_TYPE_v,				/* Suffix */
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
-     scalar_ptr_uint32_index_args /* Args */};
+     scalar_ptr_eew32_index_args /* Args */};
 
-/* A static operand information for void func (scalar_type *, uint64_index_type,
+/* A static operand information for void func (scalar_type *, eew64_index_type,
  * vector_type) function registration. */
-static CONSTEXPR const rvv_op_info all_v_scalar_ptr_uint64_index_ops
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_eew64_index_ops
   = {all_ops,				/* Types */
      OP_TYPE_v,				/* Suffix */
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
-     scalar_ptr_uint64_index_args /* Args */};
+     scalar_ptr_eew64_index_args /* Args */};
 
 /* A static operand information for vector_type func (vector_type, vector_type)
  * function registration. */
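
The uint*_index -> eew*_index renames in this hunk encode that the index
operand of indexed loads/stores has its own EEW, independent of the data
type's SEW.  Since the index vector must keep the data type's SEW/LMUL
ratio, 32-bit data at LMUL=1 pairs with a vuint16mf2_t index at EEW=16.
A hedged sketch of one resulting signature (v0.11-style name assumed):

  vint32m1_t
  gather_i32 (const int32_t *base, vuint16mf2_t bindex, size_t vl)
  {
    /* EEW=16 unsigned byte offsets indexing 32-bit elements.  */
    return __riscv_vluxei16_v_i32m1 (base, bindex, vl);
  }
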
@@ -1374,6 +1511,182 @@ static CONSTEXPR const rvv_op_info all_v_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      v_args /* Args */};
 
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i_v_u_ops
+  = {i_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_v_i_ops
+  = {u_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_signed_vector), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_v_eew8_interpret_ops
+  = {eew8_interpret_ops,			  /* Types */
+     OP_TYPE_v,					  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_eew8_interpret), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_v_eew16_interpret_ops
+  = {eew16_interpret_ops,			   /* Types */
+     OP_TYPE_v,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_eew16_interpret), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_v_eew32_interpret_ops
+  = {eew32_interpret_ops,			   /* Types */
+     OP_TYPE_v,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_eew32_interpret), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_v_eew64_interpret_ops
+  = {eew64_interpret_ops,			   /* Types */
+     OP_TYPE_v,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_eew64_interpret), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x2_ops
+  = {vlmul_ext_x2_ops,				/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x4_ops
+  = {vlmul_ext_x4_ops,				/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x8_ops
+  = {vlmul_ext_x8_ops,				/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x16_ops
+  = {vlmul_ext_x16_ops,				 /* Types */
+     OP_TYPE_v,					 /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x16), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x32_ops
+  = {vlmul_ext_x32_ops,				 /* Types */
+     OP_TYPE_v,					 /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x32), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_ext_x64_ops
+  = {vlmul_ext_x64_ops,				 /* Types */
+     OP_TYPE_v,					 /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x64), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x2_ops
+  = {vlmul_ext_x2_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x2_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x4_ops
+  = {vlmul_ext_x4_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x4_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x8_ops
+  = {vlmul_ext_x8_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x8_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x16_ops
+  = {vlmul_ext_x16_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x16_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x32_ops
+  = {vlmul_ext_x32_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x32_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vlmul_trunc_x64_ops
+  = {vlmul_ext_x64_ops,			  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     v_x64_trunc_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_v_i_ops
+  = {f_ops,					 /* Types */
+     OP_TYPE_v,					 /* Suffix */
+     rvv_arg_type_info (RVV_BASE_signed_vector), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_v_u_ops
+  = {f_ops,					   /* Types */
+     OP_TYPE_v,					   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), /* Return type */
+     v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i_v_f_ops
+  = {f_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     x_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_v_f_ops
+  = {f_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     xu_v_args /* Args */};
+
 /* A static operand information for vector_type func (scalar_type)
  * function registration. */
 static CONSTEXPR const rvv_op_info iu_x_ops
@@ -1694,6 +2007,158 @@ static CONSTEXPR const rvv_op_info iu_trunc_ops
      rvv_arg_type_info (RVV_BASE_double_trunc_vector), /* Return type */
      v_args /* Args */};
 
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul1_x2_ops
+  = {lmul1_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     ext_x2_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul1_x4_ops
+  = {lmul1_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+     ext_x4_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul1_x8_ops
+  = {lmul1_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x8), /* Return type */
+     ext_x8_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul2_x2_ops
+  = {lmul2_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     ext_x2_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul2_x4_ops
+  = {lmul2_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x4), /* Return type */
+     ext_x4_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_vset_lmul4_x2_ops
+  = {lmul4_ops,					/* Types */
+     OP_TYPE_v,					/* Suffix */
+     rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
+     ext_x2_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul1_x2_ops
+  = {lmul1_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x2_vget_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul1_x4_ops
+  = {lmul1_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x4_vget_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul1_x8_ops
+  = {lmul1_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x8_vget_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul2_x2_ops
+  = {lmul2_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x2_vget_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul2_x4_ops
+  = {lmul2_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x4_vget_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_vget_lmul4_x2_ops
+  = {lmul4_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     ext_x2_vget_args /* Args */};
+
+/* A list of all RVV function types.  Each entry maps one vector type to
+   the related types (mask, signed/unsigned, index, trunc, ext, ...) used
+   to build intrinsic function signatures.  */
+static CONSTEXPR const function_type_info function_types[] = {
+#define DEF_RVV_TYPE_INDEX(VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, \
+		      EEW32_INDEX, EEW64_INDEX, SHIFT, DOUBLE_TRUNC,           \
+		      QUAD_TRUNC, OCT_TRUNC, DOUBLE_TRUNC_SCALAR,              \
+		      DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,              \
+		      DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, \
+		      LMUL1, WLMUL1, EEW8_INTERPRET, EEW16_INTERPRET,          \
+		      EEW32_INTERPRET, EEW64_INTERPRET, X2_VLMUL_EXT,          \
+		      X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT,               \
+		      X32_VLMUL_EXT, X64_VLMUL_EXT)                            \
+  {                                                                            \
+    VECTOR_TYPE_##VECTOR,                                                      \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_##MASK,                                                        \
+    VECTOR_TYPE_##SIGNED,                                                      \
+    VECTOR_TYPE_##UNSIGNED,                                                    \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_##EEW8_INDEX,                                                  \
+    VECTOR_TYPE_##EEW16_INDEX,                                                 \
+    VECTOR_TYPE_##EEW32_INDEX,                                                 \
+    VECTOR_TYPE_##EEW64_INDEX,                                                 \
+    VECTOR_TYPE_##SHIFT,                                                       \
+    VECTOR_TYPE_##DOUBLE_TRUNC,                                                \
+    VECTOR_TYPE_##QUAD_TRUNC,                                                  \
+    VECTOR_TYPE_##OCT_TRUNC,                                                   \
+    VECTOR_TYPE_##DOUBLE_TRUNC_SCALAR,                                         \
+    VECTOR_TYPE_##DOUBLE_TRUNC_SIGNED,                                         \
+    VECTOR_TYPE_##DOUBLE_TRUNC_UNSIGNED,                                       \
+    VECTOR_TYPE_##DOUBLE_TRUNC_UNSIGNED_SCALAR,                                \
+    VECTOR_TYPE_##DOUBLE_TRUNC_FLOAT,                                          \
+    VECTOR_TYPE_##FLOAT,                                                       \
+    VECTOR_TYPE_##LMUL1,                                                       \
+    VECTOR_TYPE_##WLMUL1,                                                      \
+    VECTOR_TYPE_##EEW8_INTERPRET,                                              \
+    VECTOR_TYPE_##EEW16_INTERPRET,                                             \
+    VECTOR_TYPE_##EEW32_INTERPRET,                                             \
+    VECTOR_TYPE_##EEW64_INTERPRET,                                             \
+    VECTOR_TYPE_##X2_VLMUL_EXT,                                                \
+    VECTOR_TYPE_##X4_VLMUL_EXT,                                                \
+    VECTOR_TYPE_##X8_VLMUL_EXT,                                                \
+    VECTOR_TYPE_##X16_VLMUL_EXT,                                               \
+    VECTOR_TYPE_##X32_VLMUL_EXT,                                               \
+    VECTOR_TYPE_##X64_VLMUL_EXT,                                               \
+  },
+#include "riscv-vector-builtins.def"
+};
+
 /* A list of all RVV intrinsic functions.  */
 static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
@@ -1886,12 +2351,29 @@ register_vector_type (vector_type_index type)
 static bool
 required_extensions_p (enum rvv_base_type type)
 {
-  return type == RVV_BASE_uint8_index || type == RVV_BASE_uint16_index
-	 || type == RVV_BASE_uint32_index || type == RVV_BASE_uint64_index
+  return type == RVV_BASE_eew8_index || type == RVV_BASE_eew16_index
+	 || type == RVV_BASE_eew32_index || type == RVV_BASE_eew64_index
 	 || type == RVV_BASE_float_vector
 	 || type == RVV_BASE_double_trunc_float_vector
 	 || type == RVV_BASE_double_trunc_vector
-	 || type == RVV_BASE_widen_lmul1_vector;
+	 || type == RVV_BASE_widen_lmul1_vector
+	 || type == RVV_BASE_eew8_interpret || type == RVV_BASE_eew16_interpret
+	 || type == RVV_BASE_eew32_interpret || type == RVV_BASE_eew64_interpret
+	 || type == RVV_BASE_vlmul_ext_x2 || type == RVV_BASE_vlmul_ext_x4
+	 || type == RVV_BASE_vlmul_ext_x8 || type == RVV_BASE_vlmul_ext_x16
+	 || type == RVV_BASE_vlmul_ext_x32 || type == RVV_BASE_vlmul_ext_x64;
+}
+
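+/* Return the required-extension mask for vector type TYPE_IDX, looked
+   up in the all_ops/b_ops tables.  */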
+static uint64_t
+get_required_extensions (vector_type_index type_idx)
+{
+  for (unsigned int i = 0; all_ops[i].index != NUM_VECTOR_TYPES; i++)
+    if (type_idx == all_ops[i].index)
+      return all_ops[i].required_extensions;
+  for (unsigned int i = 0; b_ops[i].index != NUM_VECTOR_TYPES; i++)
+    if (type_idx == b_ops[i].index)
+      return b_ops[i].required_extensions;
+  gcc_unreachable ();
 }
 
 /* Check whether all the RVV_REQUIRE_* values in REQUIRED_EXTENSIONS are
@@ -1902,21 +2384,30 @@ check_required_extensions (const function_instance &instance)
   rvv_type_info type_info = instance.type;
   uint64_t required_extensions = type_info.required_extensions;
   const rvv_op_info *op_info = instance.op_info;
-  tree type = builtin_types[type_info.index].vector;
+
+  if (required_extensions_p (op_info->ret.base_type))
+    {
+      enum vector_type_index ret_type_idx
+	= op_info->ret.get_function_type_index (type_info.index);
+      if (ret_type_idx == NUM_VECTOR_TYPES)
+	return false;
+      required_extensions |= get_required_extensions (ret_type_idx);
+    }
+
   for (unsigned i = 0; op_info->args[i].base_type != NUM_BASE_TYPES; ++i)
     {
       if (!required_extensions_p (op_info->args[i].base_type))
 	continue;
 
       enum vector_type_index vector_type
-	= op_info->args[i].get_base_vector_type (type);
+	= op_info->args[i].get_function_type_index (type_info.index);
       if (vector_type == NUM_VECTOR_TYPES)
 	return false;
-      required_extensions |= op_info->types[vector_type].required_extensions;
+      required_extensions |= get_required_extensions (vector_type);
 
       /* According to RVV ISA, EEW=64 index of indexed loads/stores require
 	 XLEN = 64.  */
-      if (op_info->args[i].base_type == RVV_BASE_uint64_index)
+      if (op_info->args[i].base_type == RVV_BASE_eew64_index)
 	required_extensions |= RVV_REQUIRE_RV64BIT;
     }
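
One observable effect of the EEW=64 rule above (an illustration, not
from the patch): the EEW=64 indexed forms are usable only when XLEN is
64, so a function like the following is accepted for rv64gcv but
rejected for rv32 targets:

  vint32m1_t
  gather64 (const int32_t *base, vuint64m2_t bindex, size_t vl)
  {
    return __riscv_vluxei64_v_i32m1 (base, bindex, vl);
  }
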
 
@@ -1975,124 +2466,35 @@ get_mask_policy_for_pred (enum predication_type_index pred)
   return gen_int_mode (get_prefer_mask_policy (), Pmode);
 }
 
-static bool
-unsigned_base_type_p (rvv_base_type base_type)
+tree
+rvv_arg_type_info::get_scalar_ptr_type (vector_type_index type_idx) const
 {
-  return base_type == RVV_BASE_double_trunc_unsigned_vector
-	 || base_type == RVV_BASE_double_trunc_unsigned_scalar
-	 || base_type == RVV_BASE_unsigned_vector
-	 || base_type == RVV_BASE_uint8_index
-	 || base_type == RVV_BASE_uint16_index
-	 || base_type == RVV_BASE_uint32_index
-	 || base_type == RVV_BASE_uint64_index
-	 || base_type == RVV_BASE_shift_vector;
+  /* According to the latest rvv-intrinsic-doc, the vsm.v intrinsic is
+     defined as __riscv_vsm (uint8_t *base, vbool1_t value, size_t vl).  */
+  if (type_idx >= VECTOR_TYPE_vbool64_t && type_idx <= VECTOR_TYPE_vbool1_t)
+    return builtin_types[VECTOR_TYPE_vuint8mf8_t].scalar_ptr;
+  else
+    return builtin_types[type_idx].scalar_ptr;
 }
 
-static machine_mode
-get_mode_for_bitsize (poly_int64 bitsize, bool float_mode_p)
+tree
+rvv_arg_type_info::get_scalar_const_ptr_type (vector_type_index type_idx) const
 {
-  if (float_mode_p)
-    return float_mode_for_size (bitsize).require ();
+  /* According to the latest rvv-intrinsic-doc, the vlm.v intrinsic is
+     defined as __riscv_vlm_v_b1 (const uint8_t *base, size_t vl).  */
+  if (type_idx >= VECTOR_TYPE_vbool64_t && type_idx <= VECTOR_TYPE_vbool1_t)
+    return builtin_types[VECTOR_TYPE_vuint8mf8_t].scalar_const_ptr;
   else
-    return int_mode_for_size (bitsize, 0).require ();
+    return builtin_types[type_idx].scalar_const_ptr;
 }
 
 vector_type_index
-rvv_arg_type_info::get_base_vector_type (tree type) const
+rvv_arg_type_info::get_function_type_index (vector_type_index type_idx) const
 {
-  if (!type)
-    return NUM_VECTOR_TYPES;
-
-  poly_int64 nunits = GET_MODE_NUNITS (TYPE_MODE (type));
-  machine_mode inner_mode = GET_MODE_INNER (TYPE_MODE (type));
-  poly_int64 bitsize = GET_MODE_BITSIZE (inner_mode);
-  poly_int64 bytesize = GET_MODE_SIZE (inner_mode);
-
-  bool unsigned_p = TYPE_UNSIGNED (type);
-  if (unsigned_base_type_p (base_type))
-    unsigned_p = true;
-
-  switch (base_type)
-    {
-    case RVV_BASE_mask:
-      inner_mode = E_BImode;
-      break;
-    case RVV_BASE_uint8_index:
-      inner_mode = E_QImode;
-      break;
-    case RVV_BASE_uint16_index:
-      inner_mode = E_HImode;
-      break;
-    case RVV_BASE_uint32_index:
-      inner_mode = E_SImode;
-      break;
-    case RVV_BASE_uint64_index:
-      inner_mode = E_DImode;
-      break;
-    case RVV_BASE_shift_vector:
-      inner_mode = GET_MODE_INNER (TYPE_MODE (type));
-      break;
-    case RVV_BASE_double_trunc_vector:
-    case RVV_BASE_double_trunc_scalar:
-      inner_mode = get_mode_for_bitsize (exact_div (bitsize, 2),
-					 FLOAT_MODE_P (inner_mode));
-      break;
-    case RVV_BASE_double_trunc_unsigned_vector:
-    case RVV_BASE_double_trunc_unsigned_scalar:
-    case RVV_BASE_double_trunc_signed_vector:
-      inner_mode = int_mode_for_size (exact_div (bitsize, 2), 0).require ();
-      break;
-    case RVV_BASE_quad_trunc_vector:
-      inner_mode = get_mode_for_bitsize (exact_div (bitsize, 4),
-					 FLOAT_MODE_P (inner_mode));
-      break;
-    case RVV_BASE_oct_trunc_vector:
-      inner_mode = get_mode_for_bitsize (exact_div (bitsize, 8),
-					 FLOAT_MODE_P (inner_mode));
-      break;
-    case RVV_BASE_float_vector:
-      inner_mode = float_mode_for_size (bitsize).require ();
-      break;
-    case RVV_BASE_double_trunc_float_vector:
-      inner_mode = float_mode_for_size (exact_div (bitsize, 2)).require ();
-      break;
-    case RVV_BASE_signed_vector:
-    case RVV_BASE_unsigned_vector:
-      inner_mode = int_mode_for_mode (inner_mode).require ();
-      break;
-    case RVV_BASE_lmul1_vector:
-      nunits = exact_div (BYTES_PER_RISCV_VECTOR, bytesize);
-      break;
-    case RVV_BASE_widen_lmul1_vector:
-      inner_mode
-	= get_mode_for_bitsize (bitsize * 2, FLOAT_MODE_P (inner_mode));
-      if (BYTES_PER_RISCV_VECTOR.coeffs[0] < (bytesize * 2).coeffs[0])
-	return NUM_VECTOR_TYPES;
-      nunits = exact_div (BYTES_PER_RISCV_VECTOR, bytesize * 2);
-      break;
-    default:
-      return NUM_VECTOR_TYPES;
-    }
-
-  opt_machine_mode mode
-    = get_vector_mode (as_a<scalar_mode> (inner_mode), nunits);
-
-  if (!mode.exists ())
-    return NUM_VECTOR_TYPES;
-  for (unsigned int i = 0; i < NUM_VECTOR_TYPES + 1; i++)
-    {
-      tree vector_type = builtin_types[i].vector;
-      if (!vector_type)
-	continue;
-
-      if (GET_MODE_CLASS (TYPE_MODE (vector_type)) == MODE_VECTOR_INT
-	  && TYPE_UNSIGNED (vector_type) != unsigned_p)
-	continue;
-
-      if (TYPE_MODE (vector_type) == mode.require ())
-	return (enum vector_type_index) i;
-    }
-  return NUM_VECTOR_TYPES;
+  tree type
+    = builtin_types[function_types[type_idx].type_indexes[base_type]].vector;
+  return type ? function_types[type_idx].type_indexes[base_type]
+	      : NUM_VECTOR_TYPES;
 }
 
 tree
@@ -2104,79 +2506,17 @@ rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
      just return NULL_TREE.  */
   if (!builtin_types[type_idx].vector)
     return NULL_TREE;
+
   switch (base_type)
     {
-    case RVV_BASE_vector:
-      return builtin_types[type_idx].vector;
-    case RVV_BASE_scalar:
-      return builtin_types[type_idx].scalar;
-    /* According to riscv-vector-builtins-types.def, the unsigned
-       type is always the signed type + 1 (They have same SEW and LMUL).
-       For example 'vuint8mf8_t' enum = 'vint8mf8_t' enum + 1.
-       Note: We dont't allow type_idx to be unsigned type.  */
-    case RVV_BASE_unsigned_scalar:
-      gcc_assert (!TYPE_UNSIGNED (builtin_types[type_idx].scalar));
-      return builtin_types[type_idx + 1].scalar;
-    case RVV_BASE_vector_ptr:
-      return builtin_types[type_idx].vector_ptr;
-    case RVV_BASE_scalar_ptr:
-      /* According to the latest rvv-intrinsic-doc, it defines vsm.v intrinsic:
-	 __riscv_vsm (uint8_t *base, vbool1_t value, size_t vl).  */
-      if (type_idx >= VECTOR_TYPE_vbool64_t && type_idx <= VECTOR_TYPE_vbool1_t)
-	return builtin_types[VECTOR_TYPE_vuint8mf8_t].scalar_ptr;
-      else
-	return builtin_types[type_idx].scalar_ptr;
-    case RVV_BASE_scalar_const_ptr:
-      /* According to the latest rvv-intrinsic-doc, it defines vlm.v intrinsic:
-	 __riscv_vlm_v_b1 (const uint8_t *base, size_t vl).  */
-      if (type_idx >= VECTOR_TYPE_vbool64_t && type_idx <= VECTOR_TYPE_vbool1_t)
-	return builtin_types[VECTOR_TYPE_vuint8mf8_t].scalar_const_ptr;
-      else
-	return builtin_types[type_idx].scalar_const_ptr;
-    case RVV_BASE_void:
-      return void_type_node;
-    case RVV_BASE_size:
-      return size_type_node;
-    case RVV_BASE_ptrdiff:
-      return ptrdiff_type_node;
-    case RVV_BASE_unsigned_long:
-      return long_unsigned_type_node;
-    case RVV_BASE_long:
-      return long_integer_type_node;
-    case RVV_BASE_uint8_index:
-    case RVV_BASE_uint16_index:
-    case RVV_BASE_uint32_index:
-    case RVV_BASE_uint64_index:
-    case RVV_BASE_shift_vector:
-    case RVV_BASE_double_trunc_vector:
-    case RVV_BASE_quad_trunc_vector:
-    case RVV_BASE_oct_trunc_vector:
-    case RVV_BASE_double_trunc_signed_vector:
-    case RVV_BASE_double_trunc_unsigned_vector:
-    case RVV_BASE_mask:
-    case RVV_BASE_float_vector:
-    case RVV_BASE_double_trunc_float_vector:
-    case RVV_BASE_signed_vector:
-    case RVV_BASE_unsigned_vector:
-    case RVV_BASE_lmul1_vector:
-    case RVV_BASE_widen_lmul1_vector:
-      if (get_base_vector_type (builtin_types[type_idx].vector)
-	  != NUM_VECTOR_TYPES)
-	return builtin_types[get_base_vector_type (
-			       builtin_types[type_idx].vector)].vector;
-      break;
-    case RVV_BASE_double_trunc_scalar:
-    case RVV_BASE_double_trunc_unsigned_scalar:
-      if (get_base_vector_type (builtin_types[type_idx].vector)
-	  != NUM_VECTOR_TYPES)
-	return builtin_types[get_base_vector_type (
-			       builtin_types[type_idx].vector)].scalar;
-      break;
+#define DEF_RVV_BASE_TYPE(NAME, TYPE)                                          \
+  case RVV_BASE_##NAME:                                                        \
+    return TYPE;
+#include "riscv-vector-builtins.def"
     default:
       gcc_unreachable ();
     }
-  /* Return NULL_TREE if the type we don't want to register.  */
-  return NULL_TREE;
+  gcc_unreachable ();
 }
 
 function_instance::function_instance (const char *base_name_in,
@@ -2346,7 +2686,9 @@ function_builder::apply_predication (const function_instance &instance,
       argument_types.quick_insert (0, return_type);
 
   /* These predication types need to apply mask type.  */
-  tree mask_type = builtin_types[mask_types[instance.type.index]].vector;
+  vector_type_index mask_type_index
+    = function_types[instance.type.index].type_indexes[RVV_BASE_mask];
+  tree mask_type = builtin_types[mask_type_index].vector;
   if (instance.pred == PRED_TYPE_m || instance.pred == PRED_TYPE_tum
       || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
     argument_types.quick_insert (0, mask_type);
@@ -2559,7 +2901,9 @@ function_expander::add_mem_operand (machine_mode mode, unsigned argno)
 machine_mode
 function_expander::mask_mode (void) const
 {
-  return TYPE_MODE (builtin_types[mask_types[type.index]].vector);
+  vector_type_index mask_type_index
+    = function_types[type.index].type_indexes[RVV_BASE_mask];
+  return TYPE_MODE (builtin_types[mask_type_index].vector);
 }
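
Both this hunk and the apply_predication hunk above replace the removed
mask_types[] array with a lookup through the new function_types table.
Schematically (the concrete types are illustrative; for SEW=32, LMUL=1
the SEW/LMUL ratio is 32, so the mask type resolves to vbool32_t):

  vector_type_index mask_type_index
    = function_types[VECTOR_TYPE_vint32m1_t].type_indexes[RVV_BASE_mask];
  tree mask_type = builtin_types[mask_type_index].vector;  /* vbool32_t */
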
 
 /* Implement the call using instruction ICODE, with a 1:1 mapping between
@@ -2850,6 +3194,88 @@ function_expander::generate_insn (insn_code icode)
   return function_returns_void_p () ? const0_rtx : m_ops[0].value;
 }
 
+function_checker::function_checker (location_t location,
+				    const function_instance &instance,
+				    tree fndecl, tree fntype,
+				    unsigned int nargs, tree *args)
+  : function_call_info (location, instance, fndecl), m_fntype (fntype),
+    m_nargs (nargs), m_args (args)
+{}
+
+/* Report that LOCATION has a call to FNDECL in which argument ARGNO
+   was not an integer constant expression.  ARGNO counts from zero.  */
+void
+function_checker::report_non_ice (unsigned int argno) const
+{
+  error_at (location,
+	    "argument %d of %qE must be an integer constant"
+	    " expression",
+	    argno + 1, fndecl);
+}
+
+/* Report that LOCATION has a call to FNDECL in which argument ARGNO has
+   the value ACTUAL, whereas the function requires a value in the range
+   [MIN, MAX].  ARGNO counts from zero.  */
+void
+function_checker::report_out_of_range (unsigned int argno, HOST_WIDE_INT actual,
+				       HOST_WIDE_INT min,
+				       HOST_WIDE_INT max) const
+{
+  error_at (location,
+	    "passing %wd to argument %d of %qE, which expects"
+	    " a value in the range [%wd, %wd]",
+	    actual, argno + 1, fndecl, min, max);
+}
+
+/* Check that argument ARGNO is an integer constant expression whose
+   value is in the range [MIN, MAX].  The caller should first check
+   that argument ARGNO exists.  */
+bool
+function_checker::require_immediate (unsigned int argno, HOST_WIDE_INT min,
+				     HOST_WIDE_INT max) const
+{
+  gcc_assert (argno < m_nargs);
+  tree arg = m_args[argno];
+
+  /* The type and range are unsigned, so read the argument as an
+     unsigned rather than signed HWI.  */
+  if (!tree_fits_uhwi_p (arg))
+    {
+      report_non_ice (argno);
+      return false;
+    }
+  return require_immediate_range (argno, min, max);
+}
+
+/* Check that the value of argument ARGNO is in the range [MIN, MAX].
+   The caller must already have checked that the argument is an
+   integer constant expression.  ARGNO counts from zero.  */
+bool
+function_checker::require_immediate_range (unsigned int argno,
+					   HOST_WIDE_INT min,
+					   HOST_WIDE_INT max) const
+{
+  gcc_assert (argno < m_nargs);
+  tree arg = m_args[argno];
+  HOST_WIDE_INT actual = tree_to_uhwi (arg);
+
+  if (!IN_RANGE (actual, min, max))
+    {
+      report_out_of_range (argno, actual, min, max);
+      return false;
+    }
+
+  return true;
+}
+
+/* Perform semantic checks on the call.  Return true if the call is valid,
+   otherwise report a suitable error.  */
+bool
+function_checker::check ()
+{
+  return shape->check (*this);
+}
+
 inline hashval_t
 registered_function_hasher::hash (value_type value)
 {
@@ -3013,6 +3439,22 @@ expand_builtin (unsigned int code, tree exp, rtx target)
   return function_expander (rfn.instance, rfn.decl, exp, target).expand ();
 }
 
+/* Perform any semantic checks needed for a call to the RVV function
+   with subcode CODE, such as testing for integer constant expressions.
+   The call occurs at location LOCATION and has NARGS arguments,
+   given by ARGS.  FNDECL is the original function decl, before
+   overload resolution.
+
+   Return true if the call is valid, otherwise report a suitable error.  */
+bool
+check_builtin_call (location_t location, vec<location_t>, unsigned int code,
+		    tree fndecl, unsigned int nargs, tree *args)
+{
+  const registered_function &rfn = *(*registered_functions)[code];
+  return function_checker (location, rfn.instance, fndecl,
+			   TREE_TYPE (rfn.decl), nargs, args).check ();
+}
+
 } // end namespace riscv_vector
 
 inline void
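
With check_builtin_call wired into the front end's builtin-call checking
hook, out-of-range immediates are now diagnosed at the call site.  A
sketch of what the function_checker path rejects (an m4 group holds four
m1 parts, so the vget index must lie in [0, 3]; the diagnostic text
follows report_out_of_range above):

  vint32m1_t
  bad_get (vint32m4_t v)
  {
    /* error: passing 4 to argument 2 of '__riscv_vget_v_i32m4_i32m1',
       which expects a value in the range [0, 3]  */
    return __riscv_vget_v_i32m4_i32m1 (v, 4);
  }
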
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 5094f041f66..4d7e00de8b4 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -44,7 +44,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef DEF_RVV_TYPE
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-		     VSETVL_SUFFIX, MASK_TYPE)
+		     VSETVL_SUFFIX)
 #endif
 
 /* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
@@ -59,214 +59,234 @@ along with GCC; see the file COPYING3.  If not see
 #define DEF_RVV_PRED_TYPE(NAME)
 #endif
 
+/* Use "DEF_RVV_BASE_TYPE" macro to define RVV base types.
+   The 'NAME' is used to build the RVV_BASE_##NAME enumerator.  */
+#ifndef DEF_RVV_BASE_TYPE
+#define DEF_RVV_BASE_TYPE(NAME, TYPE)
+#endif
+
+/* Use "DEF_RVV_TYPE_INDEX" macro to define RVV function types.
+   Each entry lists, for one vector type, the related types used to
+   build intrinsic function signatures.  */
+#ifndef DEF_RVV_TYPE_INDEX
+#define DEF_RVV_TYPE_INDEX(VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, \
+		      EEW32_INDEX, EEW64_INDEX, SHIFT, DOUBLE_TRUNC,           \
+		      QUAD_TRUNC, OCT_TRUNC, DOUBLE_TRUNC_SCALAR,              \
+		      DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,              \
+		      DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, \
+		      LMUL1, WLMUL1, EEW8_INTERPRET, EEW16_INTERPRET,          \
+		      EEW32_INTERPRET, EEW64_INTERPRET, X2_VLMUL_EXT,          \
+		      X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT,               \
+		      X32_VLMUL_EXT, X64_VLMUL_EXT)
+#endif
+
 /* SEW/LMUL = 64:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1BImode.  */
-DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , , vbool64_t)
+DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , )
 /* SEW/LMUL = 32:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , , vbool32_t)
+DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , )
 /* SEW/LMUL = 16:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32.  */
-DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , , vbool16_t)
+DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , )
 /* SEW/LMUL = 8:
    Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , , vbool8_t)
+DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , )
 /* SEW/LMUL = 4:
    Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , , vbool4_t)
+DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , )
 /* SEW/LMUL = 2:
    Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , , vbool2_t)
+DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , )
 /* SEW/LMUL = 1:
    Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , , vbool1_t)
+DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , )
 
 /* LMUL = 1/8:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1QImode.  */
 DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx1QI, VOID, _i8mf8, _i8,
-	      _e8mf8, vbool64_t)
-DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx1QI, VOID,
-	      _u8mf8, _u8, _e8mf8, vbool64_t)
+	      _e8mf8)
+DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx1QI, VOID, _u8mf8,
+	      _u8, _e8mf8)
 /* LMUL = 1/4:
    Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx2QI, VNx1QI, _i8mf4,
-	      _i8, _e8mf4, vbool32_t)
-DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx2QI, VNx1QI,
-	      _u8mf4, _u8, _e8mf4, vbool32_t)
+	      _i8, _e8mf4)
+DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx2QI, VNx1QI, _u8mf4,
+	      _u8, _e8mf4)
 /* LMUL = 1/2:
    Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx4QI, VNx2QI, _i8mf2,
-	      _i8, _e8mf2, vbool16_t)
-DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx4QI, VNx2QI,
-	      _u8mf2, _u8, _e8mf2, vbool16_t)
+	      _i8, _e8mf2)
+DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx4QI, VNx2QI, _u8mf2,
+	      _u8, _e8mf2)
 /* LMUL = 1:
    Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx8QI, VNx4QI, _i8m1, _i8,
-	      _e8m1, vbool8_t)
-DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx8QI, VNx4QI,
-	      _u8m1, _u8, _e8m1, vbool8_t)
+	      _e8m1)
+DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx8QI, VNx4QI, _u8m1,
+	      _u8, _e8m1)
 /* LMUL = 2:
    Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx16QI, VNx8QI, _i8m2, _i8,
-	      _e8m2, vbool4_t)
-DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx16QI, VNx8QI,
-	      _u8m2, _u8, _e8m2, vbool4_t)
+	      _e8m2)
+DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx16QI, VNx8QI, _u8m2,
+	      _u8, _e8m2)
 /* LMUL = 4:
    Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx32QI, VNx16QI, _i8m4,
-	      _i8, _e8m4, vbool2_t)
-DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx32QI, VNx16QI,
-	      _u8m4, _u8, _e8m4, vbool2_t)
+DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx32QI, VNx16QI, _i8m4, _i8,
+	      _e8m4)
+DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx32QI, VNx16QI, _u8m4,
+	      _u8, _e8m4)
 /* LMUL = 8:
    Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx64QI, VNx32QI, _i8m8,
-	      _i8, _e8m8, vbool1_t)
-DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx64QI, VNx32QI,
-	      _u8m8, _u8, _e8m8, vbool1_t)
+DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx64QI, VNx32QI, _i8m8, _i8,
+	      _e8m8)
+DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx64QI, VNx32QI, _u8m8,
+	      _u8, _e8m8)
 
 /* LMUL = 1/4:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1HImode.  */
 DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx1HI, VOID, _i16mf4,
-	      _i16, _e16mf4, vbool64_t)
+	      _i16, _e16mf4)
 DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx1HI, VOID,
-	      _u16mf4, _u16, _e16mf4, vbool64_t)
+	      _u16mf4, _u16, _e16mf4)
 /* LMUL = 1/2:
    Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx2HI, VNx1HI, _i16mf2,
-	      _i16, _e16mf2, vbool32_t)
-DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx2HI,
-	      VNx1HI, _u16mf2, _u16, _e16mf2, vbool32_t)
+	      _i16, _e16mf2)
+DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx2HI, VNx1HI,
+	      _u16mf2, _u16, _e16mf2)
 /* LMUL = 1:
    Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx4HI, VNx2HI, _i16m1,
-	      _i16, _e16m1, vbool16_t)
-DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx4HI, VNx2HI,
-	      _u16m1, _u16, _e16m1, vbool16_t)
+	      _i16, _e16m1)
+DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx4HI, VNx2HI, _u16m1,
+	      _u16, _e16m1)
 /* LMUL = 2:
    Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx8HI, VNx4HI, _i16m2,
-	      _i16, _e16m2, vbool8_t)
-DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx8HI, VNx4HI,
-	      _u16m2, _u16, _e16m2, vbool8_t)
+	      _i16, _e16m2)
+DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx8HI, VNx4HI, _u16m2,
+	      _u16, _e16m2)
 /* LMUL = 4:
    Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx16HI, VNx8HI, _i16m4,
-	      _i16, _e16m4, vbool4_t)
-DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx16HI,
-	      VNx8HI, _u16m4, _u16, _e16m4, vbool4_t)
+	      _i16, _e16m4)
+DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx16HI, VNx8HI,
+	      _u16m4, _u16, _e16m4)
 /* LMUL = 8:
    Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx32HI, VNx16HI, _i16m8,
-	      _i16, _e16m8, vbool2_t)
-DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx32HI,
-	      VNx16HI, _u16m8, _u16, _e16m8, vbool2_t)
+	      _i16, _e16m8)
+DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx32HI, VNx16HI,
+	      _u16m8, _u16, _e16m8)
 
 /* LMUL = 1/2:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SImode.  */
 DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx1SI, VOID, _i32mf2,
-	      _i32, _e32mf2, vbool64_t)
+	      _i32, _e32mf2)
 DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx1SI, VOID,
-	      _u32mf2, _u32, _e32mf2, vbool64_t)
+	      _u32mf2, _u32, _e32mf2)
 /* LMUL = 1:
    Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx2SI, VNx1SI, _i32m1,
-	      _i32, _e32m1, vbool32_t)
-DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx2SI, VNx1SI,
-	      _u32m1, _u32, _e32m1, vbool32_t)
+	      _i32, _e32m1)
+DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx2SI, VNx1SI, _u32m1,
+	      _u32, _e32m1)
 /* LMUL = 2:
    Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx4SI, VNx2SI, _i32m2,
-	      _i32, _e32m2, vbool16_t)
-DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx4SI, VNx2SI,
-	      _u32m2, _u32, _e32m2, vbool16_t)
+	      _i32, _e32m2)
+DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx4SI, VNx2SI, _u32m2,
+	      _u32, _e32m2)
 /* LMUL = 4:
    Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx8SI, VNx4SI, _i32m4,
-	      _i32, _e32m4, vbool8_t)
-DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx8SI, VNx4SI,
-	      _u32m4, _u32, _e32m4, vbool8_t)
+	      _i32, _e32m4)
+DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx8SI, VNx4SI, _u32m4,
+	      _u32, _e32m4)
 /* LMUL = 8:
    Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx16SI, VNx8SI, _i32m8,
-	      _i32, _e32m8, vbool4_t)
-DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx16SI,
-	      VNx8SI, _u32m8, _u32, _e32m8, vbool4_t)
+	      _i32, _e32m8)
+DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx16SI, VNx8SI,
+	      _u32m8, _u32, _e32m8)
 
 /* SEW = 64:
    Only enable when TARGET_MIN_VLEN > 32.  */
 DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, VNx1DI, VOID, _i64m1,
-	      _i64, _e64m1, vbool64_t)
-DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx1DI, VOID,
-	      _u64m1, _u64, _e64m1, vbool64_t)
+	      _i64, _e64m1)
+DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx1DI, VOID, _u64m1,
+	      _u64, _e64m1)
 DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, VNx2DI, VOID, _i64m2,
-	      _i64, _e64m2, vbool32_t)
-DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx2DI, VOID,
-	      _u64m2, _u64, _e64m2, vbool32_t)
+	      _i64, _e64m2)
+DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx2DI, VOID, _u64m2,
+	      _u64, _e64m2)
 DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, VNx4DI, VOID, _i64m4,
-	      _i64, _e64m4, vbool16_t)
-DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx4DI, VOID,
-	      _u64m4, _u64, _e64m4, vbool16_t)
+	      _i64, _e64m4)
+DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx4DI, VOID, _u64m4,
+	      _u64, _e64m4)
 DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, VNx8DI, VOID, _i64m8,
-	      _i64, _e64m8, vbool8_t)
-DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx8DI, VOID,
-	      _u64m8, _u64, _e64m8, vbool8_t)
+	      _i64, _e64m8)
+DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx8DI, VOID, _u64m8,
+	      _u64, _e64m8)
 
 /* LMUL = 1/2:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SFmode.  */
 DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx1SF, VOID,
-	      _f32mf2, _f32, _e32mf2, vbool64_t)
+	      _f32mf2, _f32, _e32mf2)
 /* LMUL = 1:
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx2SF, VNx1SF,
-	      _f32m1, _f32, _e32m1, vbool32_t)
+	      _f32m1, _f32, _e32m1)
 /* LMUL = 2:
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx4SF, VNx2SF,
-	      _f32m2, _f32, _e32m2, vbool16_t)
+	      _f32m2, _f32, _e32m2)
 /* LMUL = 4:
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx8SF, VNx4SF,
-	      _f32m4, _f32, _e32m4, vbool8_t)
+	      _f32m4, _f32, _e32m4)
 /* LMUL = 8:
    Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx16SF, VNx8SF,
-	      _f32m8, _f32, _e32m8, vbool4_t)
+	      _f32m8, _f32, _e32m8)
 
 /* SEW = 64:
    Only enable when TARGET_VECTOR_FP64.  */
 DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx1DF, VOID, _f64m1,
-	      _f64, _e64m1, vbool64_t)
+	      _f64, _e64m1)
 DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx2DF, VOID, _f64m2,
-	      _f64, _e64m2, vbool32_t)
+	      _f64, _e64m2)
 DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx4DF, VOID, _f64m4,
-	      _f64, _e64m4, vbool16_t)
+	      _f64, _e64m4)
 DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx8DF, VOID, _f64m8,
-	      _f64, _e64m8, vbool8_t)
+	      _f64, _e64m8)
 
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
@@ -307,6 +327,59 @@ DEF_RVV_PRED_TYPE (m)
 DEF_RVV_PRED_TYPE (tam)
 DEF_RVV_PRED_TYPE (tum)
 
+DEF_RVV_BASE_TYPE (vector, builtin_types[type_idx].vector)
+DEF_RVV_BASE_TYPE (scalar, builtin_types[type_idx].scalar)
+DEF_RVV_BASE_TYPE (mask, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (signed_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (unsigned_vector, get_vector_type (type_idx))
+/* According to riscv-vector-builtins-types.def, the unsigned
+   type is always the signed type + 1 (they have the same SEW and LMUL).
+   For example, the 'vuint8mf8_t' enum value = the 'vint8mf8_t' enum value + 1.
+   Note: we don't allow type_idx to be an unsigned type.  */
+DEF_RVV_BASE_TYPE (unsigned_scalar, builtin_types[type_idx + 1].scalar)
+DEF_RVV_BASE_TYPE (vector_ptr, builtin_types[type_idx].vector_ptr)
+/* According to the latest rvv-intrinsic-doc, the vsm.v intrinsic is
+   defined as: __riscv_vsm (uint8_t *base, vbool1_t value, size_t vl)
+   (a usage sketch follows these definitions).  */
+DEF_RVV_BASE_TYPE (scalar_ptr, get_scalar_ptr_type (type_idx))
+/* According to the latest rvv-intrinsic-doc, the vlm.v intrinsic is
+   defined as: __riscv_vlm_v_b1 (const uint8_t *base, size_t vl).  */
+DEF_RVV_BASE_TYPE (scalar_const_ptr, get_scalar_const_ptr_type (type_idx))
+DEF_RVV_BASE_TYPE (void, void_type_node)
+DEF_RVV_BASE_TYPE (size, size_type_node)
+DEF_RVV_BASE_TYPE (ptrdiff, ptrdiff_type_node)
+DEF_RVV_BASE_TYPE (unsigned_long, long_unsigned_type_node)
+DEF_RVV_BASE_TYPE (long, long_integer_type_node)
+DEF_RVV_BASE_TYPE (eew8_index, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew16_index, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew32_index, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew64_index, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (shift_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (quad_trunc_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (oct_trunc_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_scalar, get_scalar_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_signed_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_unsigned_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_unsigned_scalar, get_scalar_type (type_idx))
+DEF_RVV_BASE_TYPE (double_trunc_float_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (float_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (lmul1_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (widen_lmul1_vector, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew8_interpret, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew16_interpret, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew32_interpret, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (eew64_interpret, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x2, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x4, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x8, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x16, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x32, get_vector_type (type_idx))
+DEF_RVV_BASE_TYPE (vlmul_ext_x64, get_vector_type (type_idx))
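
A minimal usage sketch for the two mask pointer bases above, using the
prototypes quoted in the comments (the non-overloaded __riscv_vsm_v_b1
spelling is assumed from the same intrinsic document):

    #include "riscv_vector.h"

    void
    copy_mask (uint8_t *dst, const uint8_t *src, size_t vl)
    {
      vbool1_t m = __riscv_vlm_v_b1 (src, vl);  /* scalar_const_ptr base.  */
      __riscv_vsm_v_b1 (dst, m, vl);            /* scalar_ptr base.  */
    }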
+
+#include "riscv-vector-type-indexer.gen.def"
+
 #undef DEF_RVV_PRED_TYPE
 #undef DEF_RVV_OP_TYPE
 #undef DEF_RVV_TYPE
+#undef DEF_RVV_BASE_TYPE
+#undef DEF_RVV_TYPE_INDEX
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 8707f7366d9..8464aa9b7e9 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,7 +123,8 @@ enum vector_type_index
 {
 #define DEF_RVV_TYPE(NAME, ABI_NAME, NCHARS, ARGS...) VECTOR_TYPE_##NAME,
 #include "riscv-vector-builtins.def"
-  NUM_VECTOR_TYPES
+  NUM_VECTOR_TYPES,
+  VECTOR_TYPE_INVALID = NUM_VECTOR_TYPES
 };
 
 /* Enumerates the RVV governing predication types.  */
@@ -138,36 +139,8 @@ enum predication_type_index
 /* Enumerates the RVV base types.  */
 enum rvv_base_type
 {
-  RVV_BASE_vector,
-  RVV_BASE_scalar,
-  RVV_BASE_mask,
-  RVV_BASE_signed_vector,
-  RVV_BASE_unsigned_vector,
-  RVV_BASE_unsigned_scalar,
-  RVV_BASE_vector_ptr,
-  RVV_BASE_scalar_ptr,
-  RVV_BASE_scalar_const_ptr,
-  RVV_BASE_void,
-  RVV_BASE_size,
-  RVV_BASE_ptrdiff,
-  RVV_BASE_unsigned_long,
-  RVV_BASE_long,
-  RVV_BASE_uint8_index,
-  RVV_BASE_uint16_index,
-  RVV_BASE_uint32_index,
-  RVV_BASE_uint64_index,
-  RVV_BASE_shift_vector,
-  RVV_BASE_double_trunc_vector,
-  RVV_BASE_quad_trunc_vector,
-  RVV_BASE_oct_trunc_vector,
-  RVV_BASE_double_trunc_scalar,
-  RVV_BASE_double_trunc_signed_vector,
-  RVV_BASE_double_trunc_unsigned_vector,
-  RVV_BASE_double_trunc_unsigned_scalar,
-  RVV_BASE_double_trunc_float_vector,
-  RVV_BASE_float_vector,
-  RVV_BASE_lmul1_vector,
-  RVV_BASE_widen_lmul1_vector,
+#define DEF_RVV_BASE_TYPE(NAME, ARGS...) RVV_BASE_##NAME,
+#include "riscv-vector-builtins.def"
   NUM_BASE_TYPES
 };
 
@@ -189,6 +162,13 @@ struct rvv_builtin_suffixes
   const char *vsetvl;
 };
 
+/* Builtin base type used to specify the type of a builtin function
+   argument or return value.  */
+struct function_type_info
+{
+  enum vector_type_index type_indexes[NUM_BASE_TYPES];
+};
+
 /* RVV Builtin argument information.  */
 struct rvv_arg_type_info
 {
@@ -197,7 +177,11 @@ struct rvv_arg_type_info
   {}
   enum rvv_base_type base_type;
 
-  vector_type_index get_base_vector_type (tree type) const;
+  tree get_scalar_ptr_type (vector_type_index) const;
+  tree get_scalar_const_ptr_type (vector_type_index) const;
+  vector_type_index get_function_type_index (vector_type_index) const;
+  tree get_scalar_type (vector_type_index) const;
+  tree get_vector_type (vector_type_index) const;
   tree get_tree_type (vector_type_index) const;
 };
 
@@ -352,6 +336,7 @@ public:
   machine_mode index_mode (void) const;
   machine_mode arg_mode (int) const;
   machine_mode mask_mode (void) const;
+  machine_mode ret_mode (void) const;
 
   rtx use_exact_insn (insn_code);
   rtx use_contiguous_load_insn (insn_code);
@@ -410,6 +395,37 @@ public:
   virtual rtx expand (function_expander &) const = 0;
 };
 
+/* A class for checking that the semantic constraints on a function call are
+   satisfied, such as arguments being integer constant expressions with
+   a particular range.  The parent class's FNDECL is the decl that was
+   called in the original source, before overload resolution.  */
+class function_checker : public function_call_info
+{
+public:
+  function_checker (location_t, const function_instance &, tree, tree,
+		    unsigned int, tree *);
+
+  machine_mode arg_mode (unsigned int) const;
+  machine_mode ret_mode (void) const;
+  bool check (void);
+
+  bool require_immediate (unsigned int, HOST_WIDE_INT, HOST_WIDE_INT) const;
+
+private:
+  bool require_immediate_range (unsigned int, HOST_WIDE_INT,
+				HOST_WIDE_INT) const;
+  void report_non_ice (unsigned int) const;
+  void report_out_of_range (unsigned int, HOST_WIDE_INT, HOST_WIDE_INT,
+			    HOST_WIDE_INT) const;
+
+  /* The type of the resolved function.  */
+  tree m_fntype;
+
+  /* The arguments to the function.  */
+  unsigned int m_nargs;
+  tree *m_args;
+};
+
 /* Classifies functions into "shapes" based on:
 
    - Base name of the intrinsic function.
@@ -430,6 +446,10 @@ public:
   /* Define all functions associated with the given group.  */
   virtual void build (function_builder &, const function_group_info &) const
     = 0;
+
+  /* Check whether the given call is semantically valid.  Return true
+     if it is; otherwise report an error and return false.  */
+  virtual bool check (function_checker &) const { return true; }
 };
 
 extern const char *const operand_suffixes[NUM_OP_TYPES];
@@ -437,6 +457,22 @@ extern const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1];
 extern const char *const predication_suffixes[NUM_PRED_TYPES];
 extern rvv_builtin_types_t builtin_types[NUM_VECTOR_TYPES + 1];
 
+inline tree
+rvv_arg_type_info::get_scalar_type (vector_type_index type_idx) const
+{
+  return get_function_type_index (type_idx) == VECTOR_TYPE_INVALID
+	   ? NULL_TREE
+	   : builtin_types[get_function_type_index (type_idx)].scalar;
+}
+
+inline tree
+rvv_arg_type_info::get_vector_type (vector_type_index type_idx) const
+{
+  return get_function_type_index (type_idx) == VECTOR_TYPE_INVALID
+	   ? NULL_TREE
+	   : builtin_types[get_function_type_index (type_idx)].vector;
+}
+
 inline bool
 function_instance::operator!= (const function_instance &other) const
 {
@@ -516,6 +552,25 @@ function_expander::arg_mode (int idx) const
   return TYPE_MODE (op_info->args[idx].get_tree_type (type.index));
 }
 
+/* Return the machine_mode of the corresponding return type.  */
+inline machine_mode
+function_expander::ret_mode (void) const
+{
+  return TYPE_MODE (op_info->ret.get_tree_type (type.index));
+}
+
+inline machine_mode
+function_checker::arg_mode (unsigned int argno) const
+{
+  return TYPE_MODE (TREE_TYPE (m_args[argno]));
+}
+
+inline machine_mode
+function_checker::ret_mode () const
+{
+  return TYPE_MODE (TREE_TYPE (TREE_TYPE (fndecl)));
+}
+
 /* Default implementation of function_base::call_properties, with conservatively
    correct behavior for floating-point instructions.  */
 inline unsigned int
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index d30e0235356..c2fc860e4c3 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -80,3 +80,21 @@ PASSES_EXTRA += $(srcdir)/config/riscv/riscv-passes.def
 $(common_out_file): $(srcdir)/config/riscv/riscv-cores.def \
     $(srcdir)/config/riscv/riscv-protos.h \
     $(srcdir)/config/riscv/riscv-subset.h
+
+build/genrvv-type-indexer.o: $(srcdir)/config/riscv/genrvv-type-indexer.cc $(RTL_BASE_H) $(BCONFIG_H) $(SYSTEM_H)	\
+  $(CORETYPES_H) $(GTM_H) errors.h $(GENSUPPORT_H) insn-modes.h
+
+build/genrvv-type-indexer$(build_exeext): build/genrvv-type-indexer.o
+	+$(LINKER_FOR_BUILD) $(BUILD_LINKERFLAGS) $(BUILD_LDFLAGS) -o $@ \
+	    $(filter-out $(BUILD_LIBDEPS), $^) $(BUILD_LIBS)
+
+$(srcdir)/config/riscv/riscv-vector-builtins.def: riscv-vector-type-indexer.gen.def
+
+riscv-vector-type-indexer.gen.def: s-riscv-vector-type-indexer.gen.defs ; @true
+
+s-riscv-vector-type-indexer.gen.defs: build/genrvv-type-indexer$(build_exeext)
+	$(RUN_GEN) build/genrvv-type-indexer$(build_exeext) tmp-riscv-vector-type-indexer.gen.def
+	$(SHELL) $(srcdir)/../move-if-change tmp-riscv-vector-type-indexer.gen.def    riscv-vector-type-indexer.gen.def
+	$(STAMP) s-riscv-vector-type-indexer.gen.defs
+
+genprog+=rvv-type-indexer
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 0eebe53f121..61e141e7b64 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -98,6 +98,59 @@
   (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
+(define_mode_iterator VLMULEXT2 [
+  VNx1QI VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI
+  VNx1HI VNx2HI VNx4HI VNx8HI VNx16HI
+  VNx1SI VNx2SI VNx4SI VNx8SI
+  (VNx1DI "TARGET_MIN_VLEN > 32") (VNx2DI "TARGET_MIN_VLEN > 32")
+  (VNx4DI "TARGET_MIN_VLEN > 32")
+  (VNx1SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx1DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
+])
+
+(define_mode_iterator VLMULEXT4 [
+  VNx1QI VNx2QI VNx4QI VNx8QI VNx16QI
+  VNx1HI VNx2HI VNx4HI VNx8HI
+  VNx1SI VNx2SI VNx4SI
+  (VNx1DI "TARGET_MIN_VLEN > 32") (VNx2DI "TARGET_MIN_VLEN > 32")
+  (VNx1SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx1DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
+])
+
+(define_mode_iterator VLMULEXT8 [
+  VNx1QI VNx2QI VNx4QI VNx8QI
+  VNx1HI VNx2HI VNx4HI
+  VNx1SI VNx2SI
+  (VNx1DI "TARGET_MIN_VLEN > 32")
+  (VNx1SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx1DF "TARGET_VECTOR_ELEN_FP_64")
+])
+
+(define_mode_iterator VLMULEXT16 [
+  VNx1QI VNx2QI VNx4QI
+  VNx1HI VNx2HI
+  VNx1SI
+  (VNx1SF "TARGET_VECTOR_ELEN_FP_32")
+])
+
+(define_mode_iterator VLMULEXT32 [
+  VNx1QI VNx2QI
+  VNx1HI
+])
+
+(define_mode_iterator VLMULEXT64 [
+  VNx1QI
+])
+
 (define_mode_iterator VEI16 [
   VNx1QI VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI
   VNx1HI VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32")
@@ -317,6 +370,49 @@
   (VNx4DI "TARGET_MIN_VLEN > 32") (VNx8DI "TARGET_MIN_VLEN > 32")
 ])
 
+(define_mode_attr VLMULX2 [
+  (VNx1QI "VNx2QI") (VNx2QI "VNx4QI") (VNx4QI "VNx8QI") (VNx8QI "VNx16QI") (VNx16QI "VNx32QI") (VNx32QI "VNx64QI")
+  (VNx1HI "VNx2HI") (VNx2HI "VNx4HI") (VNx4HI "VNx8HI") (VNx8HI "VNx16HI") (VNx16HI "VNx32HI")
+  (VNx1SI "VNx2SI") (VNx2SI "VNx4SI") (VNx4SI "VNx8SI") (VNx8SI "VNx16SI")
+  (VNx1DI "VNx2DI") (VNx2DI "VNx4DI") (VNx4DI "VNx8DI")
+  (VNx1SF "VNx2SF") (VNx2SF "VNx4SF") (VNx4SF "VNx8SF") (VNx8SF "VNx16SF")
+  (VNx1DF "VNx2DF") (VNx2DF "VNx4DF") (VNx4DF "VNx8DF")
+])
+
+(define_mode_attr VLMULX4 [
+  (VNx1QI "VNx4QI") (VNx2QI "VNx8QI") (VNx4QI "VNx16QI") (VNx8QI "VNx32QI") (VNx16QI "VNx64QI")
+  (VNx1HI "VNx4HI") (VNx2HI "VNx8HI") (VNx4HI "VNx16HI") (VNx8HI "VNx32HI")
+  (VNx1SI "VNx4SI") (VNx2SI "VNx8SI") (VNx4SI "VNx16SI")
+  (VNx1DI "VNx4DI") (VNx2DI "VNx8DI")
+  (VNx1SF "VNx4SF") (VNx2SF "VNx8SF") (VNx4SF "VNx16SF")
+  (VNx1DF "VNx4DF") (VNx2DF "VNx8DF")
+])
+
+(define_mode_attr VLMULX8 [
+  (VNx1QI "VNx8QI") (VNx2QI "VNx16QI") (VNx4QI "VNx32QI") (VNx8QI "VNx64QI")
+  (VNx1HI "VNx8HI") (VNx2HI "VNx16HI") (VNx4HI "VNx32HI")
+  (VNx1SI "VNx8SI") (VNx2SI "VNx16SI")
+  (VNx1DI "VNx8DI")
+  (VNx1SF "VNx8SF") (VNx2SF "VNx16SF")
+  (VNx1DF "VNx8DF")
+])
+
+(define_mode_attr VLMULX16 [
+  (VNx1QI "VNx16QI") (VNx2QI "VNx32QI") (VNx4QI "VNx64QI")
+  (VNx1HI "VNx16HI") (VNx2HI "VNx32HI")
+  (VNx1SI "VNx16SI")
+  (VNx1SF "VNx16SF")
+])
+
+(define_mode_attr VLMULX32 [
+  (VNx1QI "VNx32QI") (VNx2QI "VNx64QI")
+  (VNx1HI "VNx32HI")
+])
+
+(define_mode_attr VLMULX64 [
+  (VNx1QI "VNx64QI")
+])
+
 (define_mode_attr VINDEX [
   (VNx1QI "VNx1QI") (VNx2QI "VNx2QI") (VNx4QI "VNx4QI") (VNx8QI "VNx8QI")
   (VNx16QI "VNx16QI") (VNx32QI "VNx32QI") (VNx64QI "VNx64QI")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 62e1abbb2da..2d4eb8bf1cd 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -354,12 +354,142 @@
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
 
-(define_insn "vundefined<mode>"
+(define_insn "@vundefined<mode>"
   [(set (match_operand:V 0 "register_operand" "=vr")
-	(unspec:V [(const_int 0)] UNSPEC_VUNDEF))]
+	(unspec:V [(reg:SI X0_REGNUM)] UNSPEC_VUNDEF))]
   "TARGET_VECTOR"
   "")
 
+(define_expand "@vreinterpret<mode>"
+  [(set (match_operand:V 0 "register_operand")
+	(match_operand 1 "vector_any_register_operand"))]
+  "TARGET_VECTOR"
+  {
+    emit_move_insn (operands[0], gen_lowpart (<MODE>mode, operands[1]));
+    DONE;
+  }
+)
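
At the source level this expander backs the reinterpret casts, which change
only the static type, not the bits.  A sketch (the intrinsic name follows
the rvv-intrinsic-doc naming scheme):

    #include "riscv_vector.h"

    vuint32m1_t
    cast_bits (vint32m1_t v)
    {
      /* Expands to a lowpart move between same-sized modes; no vector
         instruction is needed.  */
      return __riscv_vreinterpret_v_i32m1_u32m1 (v);
    }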
+
+(define_expand "@vlmul_extx2<mode>"
+  [(set (match_operand:<VLMULX2> 0 "register_operand")
+  	(subreg:<VLMULX2>
+  	  (match_operand:VLMULEXT2 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
+(define_expand "@vlmul_extx4<mode>"
+  [(set (match_operand:<VLMULX4> 0 "register_operand")
+  	(subreg:<VLMULX4>
+  	  (match_operand:VLMULEXT4 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
+(define_expand "@vlmul_extx8<mode>"
+  [(set (match_operand:<VLMULX8> 0 "register_operand")
+  	(subreg:<VLMULX8>
+  	  (match_operand:VLMULEXT8 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
+(define_expand "@vlmul_extx16<mode>"
+  [(set (match_operand:<VLMULX16> 0 "register_operand")
+  	(subreg:<VLMULX16>
+  	  (match_operand:VLMULEXT16 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
+(define_expand "@vlmul_extx32<mode>"
+  [(set (match_operand:<VLMULX32> 0 "register_operand")
+  	(subreg:<VLMULX32>
+  	  (match_operand:VLMULEXT32 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
+(define_expand "@vlmul_extx64<mode>"
+  [(set (match_operand:<VLMULX64> 0 "register_operand")
+  	(subreg:<VLMULX64>
+  	  (match_operand:VLMULEXT64 1 "register_operand") 0))]
+  "TARGET_VECTOR"
+{})
+
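+;; In the define_insn_and_split patterns below, alternative 0 ties the
+;; input to the output ("0"), so an LMUL extension within the same
+;; register group needs no instruction; alternative 1 ("?&vr") is a
+;; disparaged earlyclobber fallback whose split emits a whole-register
+;; move after reload.
+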
+(define_insn_and_split "*vlmul_extx2<mode>"
+  [(set (match_operand:<VLMULX2> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX2>
+	  (match_operand:VLMULEXT2 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
+(define_insn_and_split "*vlmul_extx4<mode>"
+  [(set (match_operand:<VLMULX4> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX4>
+	  (match_operand:VLMULEXT4 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
+(define_insn_and_split "*vlmul_extx8<mode>"
+  [(set (match_operand:<VLMULX8> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX8>
+	  (match_operand:VLMULEXT8 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
+(define_insn_and_split "*vlmul_extx16<mode>"
+  [(set (match_operand:<VLMULX16> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX16>
+	  (match_operand:VLMULEXT16 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
+(define_insn_and_split "*vlmul_extx32<mode>"
+  [(set (match_operand:<VLMULX32> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX32>
+	  (match_operand:VLMULEXT32 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
+(define_insn_and_split "*vlmul_extx64<mode>"
+  [(set (match_operand:<VLMULX64> 0 "register_operand"  "=vr, ?&vr")
+	(subreg:<VLMULX64>
+	  (match_operand:VLMULEXT64 1 "register_operand" " 0,   vr") 0))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+{
+  emit_insn (gen_rtx_SET (gen_lowpart (<MODE>mode, operands[0]), operands[1]));
+  DONE;
+})
+
 ;; This pattern is used to hold the AVL operand for
 ;; RVV instructions that implicitly use VLMAX AVL.
 ;; RVV instructions implicitly use a GPR that is ultimately
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlmul_v.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlmul_v.c
new file mode 100644
index 00000000000..1925ae37c89
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlmul_v.c
@@ -0,0 +1,1448 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vfloat32m1_t test___riscv_vlmul_ext_v_f32mf2_f32m1(vfloat32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32mf2_f32m1(op1);
+}
+
+
+vfloat32m2_t test___riscv_vlmul_ext_v_f32mf2_f32m2(vfloat32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32mf2_f32m2(op1);
+}
+
+
+vfloat32m4_t test___riscv_vlmul_ext_v_f32mf2_f32m4(vfloat32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32mf2_f32m4(op1);
+}
+
+
+vfloat32m8_t test___riscv_vlmul_ext_v_f32mf2_f32m8(vfloat32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32mf2_f32m8(op1);
+}
+
+
+vfloat32m2_t test___riscv_vlmul_ext_v_f32m1_f32m2(vfloat32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m1_f32m2(op1);
+}
+
+
+vfloat32m4_t test___riscv_vlmul_ext_v_f32m1_f32m4(vfloat32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m1_f32m4(op1);
+}
+
+
+vfloat32m8_t test___riscv_vlmul_ext_v_f32m1_f32m8(vfloat32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m1_f32m8(op1);
+}
+
+
+vfloat32m4_t test___riscv_vlmul_ext_v_f32m2_f32m4(vfloat32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m2_f32m4(op1);
+}
+
+
+vfloat32m8_t test___riscv_vlmul_ext_v_f32m2_f32m8(vfloat32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m2_f32m8(op1);
+}
+
+
+vfloat32m8_t test___riscv_vlmul_ext_v_f32m4_f32m8(vfloat32m4_t op1)
+{
+    return __riscv_vlmul_ext_v_f32m4_f32m8(op1);
+}
+
+
+vfloat64m2_t test___riscv_vlmul_ext_v_f64m1_f64m2(vfloat64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m1_f64m2(op1);
+}
+
+
+vfloat64m4_t test___riscv_vlmul_ext_v_f64m1_f64m4(vfloat64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m1_f64m4(op1);
+}
+
+
+vfloat64m8_t test___riscv_vlmul_ext_v_f64m1_f64m8(vfloat64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m1_f64m8(op1);
+}
+
+
+vfloat64m4_t test___riscv_vlmul_ext_v_f64m2_f64m4(vfloat64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m2_f64m4(op1);
+}
+
+
+vfloat64m8_t test___riscv_vlmul_ext_v_f64m2_f64m8(vfloat64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m2_f64m8(op1);
+}
+
+
+vfloat64m8_t test___riscv_vlmul_ext_v_f64m4_f64m8(vfloat64m4_t op1)
+{
+    return __riscv_vlmul_ext_v_f64m4_f64m8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_ext_v_i8mf8_i8mf4(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8mf4(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_ext_v_i8mf8_i8mf2(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8mf2(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_ext_v_i8mf8_i8m1(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8m1(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_ext_v_i8mf8_i8m2(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8m2(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_ext_v_i8mf8_i8m4(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8m4(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8mf8_i8m8(vint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf8_i8m8(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_ext_v_i8mf4_i8mf2(vint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf4_i8mf2(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_ext_v_i8mf4_i8m1(vint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf4_i8m1(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_ext_v_i8mf4_i8m2(vint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf4_i8m2(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_ext_v_i8mf4_i8m4(vint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf4_i8m4(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8mf4_i8m8(vint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf4_i8m8(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_ext_v_i8mf2_i8m1(vint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf2_i8m1(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_ext_v_i8mf2_i8m2(vint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf2_i8m2(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_ext_v_i8mf2_i8m4(vint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf2_i8m4(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8mf2_i8m8(vint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8mf2_i8m8(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_ext_v_i8m1_i8m2(vint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m1_i8m2(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_ext_v_i8m1_i8m4(vint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m1_i8m4(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8m1_i8m8(vint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m1_i8m8(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_ext_v_i8m2_i8m4(vint8m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m2_i8m4(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8m2_i8m8(vint8m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m2_i8m8(op1);
+}
+
+
+vint8m8_t test___riscv_vlmul_ext_v_i8m4_i8m8(vint8m4_t op1)
+{
+    return __riscv_vlmul_ext_v_i8m4_i8m8(op1);
+}
+
+
+vint16mf2_t test___riscv_vlmul_ext_v_i16mf4_i16mf2(vint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf4_i16mf2(op1);
+}
+
+
+vint16m1_t test___riscv_vlmul_ext_v_i16mf4_i16m1(vint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf4_i16m1(op1);
+}
+
+
+vint16m2_t test___riscv_vlmul_ext_v_i16mf4_i16m2(vint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf4_i16m2(op1);
+}
+
+
+vint16m4_t test___riscv_vlmul_ext_v_i16mf4_i16m4(vint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf4_i16m4(op1);
+}
+
+
+vint16m8_t test___riscv_vlmul_ext_v_i16mf4_i16m8(vint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf4_i16m8(op1);
+}
+
+
+vint16m1_t test___riscv_vlmul_ext_v_i16mf2_i16m1(vint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf2_i16m1(op1);
+}
+
+
+vint16m2_t test___riscv_vlmul_ext_v_i16mf2_i16m2(vint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf2_i16m2(op1);
+}
+
+
+vint16m4_t test___riscv_vlmul_ext_v_i16mf2_i16m4(vint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf2_i16m4(op1);
+}
+
+
+vint16m8_t test___riscv_vlmul_ext_v_i16mf2_i16m8(vint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16mf2_i16m8(op1);
+}
+
+
+vint16m2_t test___riscv_vlmul_ext_v_i16m1_i16m2(vint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m1_i16m2(op1);
+}
+
+
+vint16m4_t test___riscv_vlmul_ext_v_i16m1_i16m4(vint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m1_i16m4(op1);
+}
+
+
+vint16m8_t test___riscv_vlmul_ext_v_i16m1_i16m8(vint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m1_i16m8(op1);
+}
+
+
+vint16m4_t test___riscv_vlmul_ext_v_i16m2_i16m4(vint16m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m2_i16m4(op1);
+}
+
+
+vint16m8_t test___riscv_vlmul_ext_v_i16m2_i16m8(vint16m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m2_i16m8(op1);
+}
+
+
+vint16m8_t test___riscv_vlmul_ext_v_i16m4_i16m8(vint16m4_t op1)
+{
+    return __riscv_vlmul_ext_v_i16m4_i16m8(op1);
+}
+
+
+vint32m1_t test___riscv_vlmul_ext_v_i32mf2_i32m1(vint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32mf2_i32m1(op1);
+}
+
+
+vint32m2_t test___riscv_vlmul_ext_v_i32mf2_i32m2(vint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32mf2_i32m2(op1);
+}
+
+
+vint32m4_t test___riscv_vlmul_ext_v_i32mf2_i32m4(vint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32mf2_i32m4(op1);
+}
+
+
+vint32m8_t test___riscv_vlmul_ext_v_i32mf2_i32m8(vint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32mf2_i32m8(op1);
+}
+
+
+vint32m2_t test___riscv_vlmul_ext_v_i32m1_i32m2(vint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m1_i32m2(op1);
+}
+
+
+vint32m4_t test___riscv_vlmul_ext_v_i32m1_i32m4(vint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m1_i32m4(op1);
+}
+
+
+vint32m8_t test___riscv_vlmul_ext_v_i32m1_i32m8(vint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m1_i32m8(op1);
+}
+
+
+vint32m4_t test___riscv_vlmul_ext_v_i32m2_i32m4(vint32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m2_i32m4(op1);
+}
+
+
+vint32m8_t test___riscv_vlmul_ext_v_i32m2_i32m8(vint32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m2_i32m8(op1);
+}
+
+
+vint32m8_t test___riscv_vlmul_ext_v_i32m4_i32m8(vint32m4_t op1)
+{
+    return __riscv_vlmul_ext_v_i32m4_i32m8(op1);
+}
+
+
+vint64m2_t test___riscv_vlmul_ext_v_i64m1_i64m2(vint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m1_i64m2(op1);
+}
+
+
+vint64m4_t test___riscv_vlmul_ext_v_i64m1_i64m4(vint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m1_i64m4(op1);
+}
+
+
+vint64m8_t test___riscv_vlmul_ext_v_i64m1_i64m8(vint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m1_i64m8(op1);
+}
+
+
+vint64m4_t test___riscv_vlmul_ext_v_i64m2_i64m4(vint64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m2_i64m4(op1);
+}
+
+
+vint64m8_t test___riscv_vlmul_ext_v_i64m2_i64m8(vint64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m2_i64m8(op1);
+}
+
+
+vint64m8_t test___riscv_vlmul_ext_v_i64m4_i64m8(vint64m4_t op1)
+{
+    return __riscv_vlmul_ext_v_i64m4_i64m8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_ext_v_u8mf8_u8mf4(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8mf4(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_ext_v_u8mf8_u8mf2(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8mf2(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_ext_v_u8mf8_u8m1(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8m1(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_ext_v_u8mf8_u8m2(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8m2(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_ext_v_u8mf8_u8m4(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8m4(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8mf8_u8m8(vuint8mf8_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf8_u8m8(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_ext_v_u8mf4_u8mf2(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf4_u8mf2(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_ext_v_u8mf4_u8m1(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf4_u8m1(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_ext_v_u8mf4_u8m2(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf4_u8m2(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_ext_v_u8mf4_u8m4(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf4_u8m4(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8mf4_u8m8(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf4_u8m8(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_ext_v_u8mf2_u8m1(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf2_u8m1(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_ext_v_u8mf2_u8m2(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf2_u8m2(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_ext_v_u8mf2_u8m4(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf2_u8m4(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8mf2_u8m8(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8mf2_u8m8(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_ext_v_u8m1_u8m2(vuint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m1_u8m2(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_ext_v_u8m1_u8m4(vuint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m1_u8m4(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8m1_u8m8(vuint8m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m1_u8m8(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_ext_v_u8m2_u8m4(vuint8m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m2_u8m4(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8m2_u8m8(vuint8m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m2_u8m8(op1);
+}
+
+
+vuint8m8_t test___riscv_vlmul_ext_v_u8m4_u8m8(vuint8m4_t op1)
+{
+    return __riscv_vlmul_ext_v_u8m4_u8m8(op1);
+}
+
+
+vuint16mf2_t test___riscv_vlmul_ext_v_u16mf4_u16mf2(vuint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf4_u16mf2(op1);
+}
+
+
+vuint16m1_t test___riscv_vlmul_ext_v_u16mf4_u16m1(vuint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf4_u16m1(op1);
+}
+
+
+vuint16m2_t test___riscv_vlmul_ext_v_u16mf4_u16m2(vuint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf4_u16m2(op1);
+}
+
+
+vuint16m4_t test___riscv_vlmul_ext_v_u16mf4_u16m4(vuint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf4_u16m4(op1);
+}
+
+
+vuint16m8_t test___riscv_vlmul_ext_v_u16mf4_u16m8(vuint16mf4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf4_u16m8(op1);
+}
+
+
+vuint16m1_t test___riscv_vlmul_ext_v_u16mf2_u16m1(vuint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf2_u16m1(op1);
+}
+
+
+vuint16m2_t test___riscv_vlmul_ext_v_u16mf2_u16m2(vuint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf2_u16m2(op1);
+}
+
+
+vuint16m4_t test___riscv_vlmul_ext_v_u16mf2_u16m4(vuint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf2_u16m4(op1);
+}
+
+
+vuint16m8_t test___riscv_vlmul_ext_v_u16mf2_u16m8(vuint16mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16mf2_u16m8(op1);
+}
+
+
+vuint16m2_t test___riscv_vlmul_ext_v_u16m1_u16m2(vuint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m1_u16m2(op1);
+}
+
+
+vuint16m4_t test___riscv_vlmul_ext_v_u16m1_u16m4(vuint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m1_u16m4(op1);
+}
+
+
+vuint16m8_t test___riscv_vlmul_ext_v_u16m1_u16m8(vuint16m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m1_u16m8(op1);
+}
+
+
+vuint16m4_t test___riscv_vlmul_ext_v_u16m2_u16m4(vuint16m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m2_u16m4(op1);
+}
+
+
+vuint16m8_t test___riscv_vlmul_ext_v_u16m2_u16m8(vuint16m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m2_u16m8(op1);
+}
+
+
+vuint16m8_t test___riscv_vlmul_ext_v_u16m4_u16m8(vuint16m4_t op1)
+{
+    return __riscv_vlmul_ext_v_u16m4_u16m8(op1);
+}
+
+
+vuint32m1_t test___riscv_vlmul_ext_v_u32mf2_u32m1(vuint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32mf2_u32m1(op1);
+}
+
+
+vuint32m2_t test___riscv_vlmul_ext_v_u32mf2_u32m2(vuint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32mf2_u32m2(op1);
+}
+
+
+vuint32m4_t test___riscv_vlmul_ext_v_u32mf2_u32m4(vuint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32mf2_u32m4(op1);
+}
+
+
+vuint32m8_t test___riscv_vlmul_ext_v_u32mf2_u32m8(vuint32mf2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32mf2_u32m8(op1);
+}
+
+
+vuint32m2_t test___riscv_vlmul_ext_v_u32m1_u32m2(vuint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m1_u32m2(op1);
+}
+
+
+vuint32m4_t test___riscv_vlmul_ext_v_u32m1_u32m4(vuint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m1_u32m4(op1);
+}
+
+
+vuint32m8_t test___riscv_vlmul_ext_v_u32m1_u32m8(vuint32m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m1_u32m8(op1);
+}
+
+
+vuint32m4_t test___riscv_vlmul_ext_v_u32m2_u32m4(vuint32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m2_u32m4(op1);
+}
+
+
+vuint32m8_t test___riscv_vlmul_ext_v_u32m2_u32m8(vuint32m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m2_u32m8(op1);
+}
+
+
+vuint32m8_t test___riscv_vlmul_ext_v_u32m4_u32m8(vuint32m4_t op1)
+{
+    return __riscv_vlmul_ext_v_u32m4_u32m8(op1);
+}
+
+
+vuint64m2_t test___riscv_vlmul_ext_v_u64m1_u64m2(vuint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m1_u64m2(op1);
+}
+
+
+vuint64m4_t test___riscv_vlmul_ext_v_u64m1_u64m4(vuint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m1_u64m4(op1);
+}
+
+
+vuint64m8_t test___riscv_vlmul_ext_v_u64m1_u64m8(vuint64m1_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m1_u64m8(op1);
+}
+
+
+vuint64m4_t test___riscv_vlmul_ext_v_u64m2_u64m4(vuint64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m2_u64m4(op1);
+}
+
+
+vuint64m8_t test___riscv_vlmul_ext_v_u64m2_u64m8(vuint64m2_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m2_u64m8(op1);
+}
+
+
+vuint64m8_t test___riscv_vlmul_ext_v_u64m4_u64m8(vuint64m4_t op1)
+{
+    return __riscv_vlmul_ext_v_u64m4_u64m8(op1);
+}
+
+
+vfloat32mf2_t test___riscv_vlmul_trunc_v_f32m1_f32mf2(vfloat32m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m1_f32mf2(op1);
+}
+
+
+vfloat32mf2_t test___riscv_vlmul_trunc_v_f32m2_f32mf2(vfloat32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m2_f32mf2(op1);
+}
+
+
+vfloat32m1_t test___riscv_vlmul_trunc_v_f32m2_f32m1(vfloat32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m2_f32m1(op1);
+}
+
+
+vfloat32mf2_t test___riscv_vlmul_trunc_v_f32m4_f32mf2(vfloat32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m4_f32mf2(op1);
+}
+
+
+vfloat32m1_t test___riscv_vlmul_trunc_v_f32m4_f32m1(vfloat32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m4_f32m1(op1);
+}
+
+
+vfloat32m2_t test___riscv_vlmul_trunc_v_f32m4_f32m2(vfloat32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m4_f32m2(op1);
+}
+
+
+vfloat32mf2_t test___riscv_vlmul_trunc_v_f32m8_f32mf2(vfloat32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m8_f32mf2(op1);
+}
+
+
+vfloat32m1_t test___riscv_vlmul_trunc_v_f32m8_f32m1(vfloat32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m8_f32m1(op1);
+}
+
+
+vfloat32m2_t test___riscv_vlmul_trunc_v_f32m8_f32m2(vfloat32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m8_f32m2(op1);
+}
+
+
+vfloat32m4_t test___riscv_vlmul_trunc_v_f32m8_f32m4(vfloat32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f32m8_f32m4(op1);
+}
+
+
+vfloat64m1_t test___riscv_vlmul_trunc_v_f64m2_f64m1(vfloat64m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m2_f64m1(op1);
+}
+
+
+vfloat64m1_t test___riscv_vlmul_trunc_v_f64m4_f64m1(vfloat64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m4_f64m1(op1);
+}
+
+
+vfloat64m2_t test___riscv_vlmul_trunc_v_f64m4_f64m2(vfloat64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m4_f64m2(op1);
+}
+
+
+vfloat64m1_t test___riscv_vlmul_trunc_v_f64m8_f64m1(vfloat64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m8_f64m1(op1);
+}
+
+
+vfloat64m2_t test___riscv_vlmul_trunc_v_f64m8_f64m2(vfloat64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m8_f64m2(op1);
+}
+
+
+vfloat64m4_t test___riscv_vlmul_trunc_v_f64m8_f64m4(vfloat64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_f64m8_f64m4(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8mf4_i8mf8(vint8mf4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8mf4_i8mf8(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8mf2_i8mf8(vint8mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8mf2_i8mf8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_trunc_v_i8mf2_i8mf4(vint8mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8mf2_i8mf4(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8m1_i8mf8(vint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m1_i8mf8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_trunc_v_i8m1_i8mf4(vint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m1_i8mf4(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_trunc_v_i8m1_i8mf2(vint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m1_i8mf2(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8m2_i8mf8(vint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m2_i8mf8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_trunc_v_i8m2_i8mf4(vint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m2_i8mf4(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_trunc_v_i8m2_i8mf2(vint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m2_i8mf2(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_trunc_v_i8m2_i8m1(vint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m2_i8m1(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8m4_i8mf8(vint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m4_i8mf8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_trunc_v_i8m4_i8mf4(vint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m4_i8mf4(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_trunc_v_i8m4_i8mf2(vint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m4_i8mf2(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_trunc_v_i8m4_i8m1(vint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m4_i8m1(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_trunc_v_i8m4_i8m2(vint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m4_i8m2(op1);
+}
+
+
+vint8mf8_t test___riscv_vlmul_trunc_v_i8m8_i8mf8(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8mf8(op1);
+}
+
+
+vint8mf4_t test___riscv_vlmul_trunc_v_i8m8_i8mf4(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8mf4(op1);
+}
+
+
+vint8mf2_t test___riscv_vlmul_trunc_v_i8m8_i8mf2(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8mf2(op1);
+}
+
+
+vint8m1_t test___riscv_vlmul_trunc_v_i8m8_i8m1(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8m1(op1);
+}
+
+
+vint8m2_t test___riscv_vlmul_trunc_v_i8m8_i8m2(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8m2(op1);
+}
+
+
+vint8m4_t test___riscv_vlmul_trunc_v_i8m8_i8m4(vint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i8m8_i8m4(op1);
+}
+
+
+vint16mf4_t test___riscv_vlmul_trunc_v_i16mf2_i16mf4(vint16mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16mf2_i16mf4(op1);
+}
+
+
+vint16mf4_t test___riscv_vlmul_trunc_v_i16m1_i16mf4(vint16m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m1_i16mf4(op1);
+}
+
+
+vint16mf2_t test___riscv_vlmul_trunc_v_i16m1_i16mf2(vint16m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m1_i16mf2(op1);
+}
+
+
+vint16mf4_t test___riscv_vlmul_trunc_v_i16m2_i16mf4(vint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m2_i16mf4(op1);
+}
+
+
+vint16mf2_t test___riscv_vlmul_trunc_v_i16m2_i16mf2(vint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m2_i16mf2(op1);
+}
+
+
+vint16m1_t test___riscv_vlmul_trunc_v_i16m2_i16m1(vint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m2_i16m1(op1);
+}
+
+
+vint16mf4_t test___riscv_vlmul_trunc_v_i16m4_i16mf4(vint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m4_i16mf4(op1);
+}
+
+
+vint16mf2_t test___riscv_vlmul_trunc_v_i16m4_i16mf2(vint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m4_i16mf2(op1);
+}
+
+
+vint16m1_t test___riscv_vlmul_trunc_v_i16m4_i16m1(vint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m4_i16m1(op1);
+}
+
+
+vint16m2_t test___riscv_vlmul_trunc_v_i16m4_i16m2(vint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m4_i16m2(op1);
+}
+
+
+vint16mf4_t test___riscv_vlmul_trunc_v_i16m8_i16mf4(vint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m8_i16mf4(op1);
+}
+
+
+vint16mf2_t test___riscv_vlmul_trunc_v_i16m8_i16mf2(vint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m8_i16mf2(op1);
+}
+
+
+vint16m1_t test___riscv_vlmul_trunc_v_i16m8_i16m1(vint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m8_i16m1(op1);
+}
+
+
+vint16m2_t test___riscv_vlmul_trunc_v_i16m8_i16m2(vint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m8_i16m2(op1);
+}
+
+
+vint16m4_t test___riscv_vlmul_trunc_v_i16m8_i16m4(vint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i16m8_i16m4(op1);
+}
+
+
+vint32mf2_t test___riscv_vlmul_trunc_v_i32m1_i32mf2(vint32m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m1_i32mf2(op1);
+}
+
+
+vint32mf2_t test___riscv_vlmul_trunc_v_i32m2_i32mf2(vint32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m2_i32mf2(op1);
+}
+
+
+vint32m1_t test___riscv_vlmul_trunc_v_i32m2_i32m1(vint32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m2_i32m1(op1);
+}
+
+
+vint32mf2_t test___riscv_vlmul_trunc_v_i32m4_i32mf2(vint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m4_i32mf2(op1);
+}
+
+
+vint32m1_t test___riscv_vlmul_trunc_v_i32m4_i32m1(vint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m4_i32m1(op1);
+}
+
+
+vint32m2_t test___riscv_vlmul_trunc_v_i32m4_i32m2(vint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m4_i32m2(op1);
+}
+
+
+vint32mf2_t test___riscv_vlmul_trunc_v_i32m8_i32mf2(vint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m8_i32mf2(op1);
+}
+
+
+vint32m1_t test___riscv_vlmul_trunc_v_i32m8_i32m1(vint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m8_i32m1(op1);
+}
+
+
+vint32m2_t test___riscv_vlmul_trunc_v_i32m8_i32m2(vint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m8_i32m2(op1);
+}
+
+
+vint32m4_t test___riscv_vlmul_trunc_v_i32m8_i32m4(vint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i32m8_i32m4(op1);
+}
+
+
+vint64m1_t test___riscv_vlmul_trunc_v_i64m2_i64m1(vint64m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m2_i64m1(op1);
+}
+
+
+vint64m1_t test___riscv_vlmul_trunc_v_i64m4_i64m1(vint64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m4_i64m1(op1);
+}
+
+
+vint64m2_t test___riscv_vlmul_trunc_v_i64m4_i64m2(vint64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m4_i64m2(op1);
+}
+
+
+vint64m1_t test___riscv_vlmul_trunc_v_i64m8_i64m1(vint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m8_i64m1(op1);
+}
+
+
+vint64m2_t test___riscv_vlmul_trunc_v_i64m8_i64m2(vint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m8_i64m2(op1);
+}
+
+
+vint64m4_t test___riscv_vlmul_trunc_v_i64m8_i64m4(vint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_i64m8_i64m4(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8mf4_u8mf8(vuint8mf4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8mf4_u8mf8(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8mf2_u8mf8(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8mf2_u8mf8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_trunc_v_u8mf2_u8mf4(vuint8mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8mf2_u8mf4(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8m1_u8mf8(vuint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m1_u8mf8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_trunc_v_u8m1_u8mf4(vuint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m1_u8mf4(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_trunc_v_u8m1_u8mf2(vuint8m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m1_u8mf2(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8m2_u8mf8(vuint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m2_u8mf8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_trunc_v_u8m2_u8mf4(vuint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m2_u8mf4(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_trunc_v_u8m2_u8mf2(vuint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m2_u8mf2(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_trunc_v_u8m2_u8m1(vuint8m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m2_u8m1(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8m4_u8mf8(vuint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m4_u8mf8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_trunc_v_u8m4_u8mf4(vuint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m4_u8mf4(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_trunc_v_u8m4_u8mf2(vuint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m4_u8mf2(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_trunc_v_u8m4_u8m1(vuint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m4_u8m1(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_trunc_v_u8m4_u8m2(vuint8m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m4_u8m2(op1);
+}
+
+
+vuint8mf8_t test___riscv_vlmul_trunc_v_u8m8_u8mf8(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8mf8(op1);
+}
+
+
+vuint8mf4_t test___riscv_vlmul_trunc_v_u8m8_u8mf4(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8mf4(op1);
+}
+
+
+vuint8mf2_t test___riscv_vlmul_trunc_v_u8m8_u8mf2(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8mf2(op1);
+}
+
+
+vuint8m1_t test___riscv_vlmul_trunc_v_u8m8_u8m1(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8m1(op1);
+}
+
+
+vuint8m2_t test___riscv_vlmul_trunc_v_u8m8_u8m2(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8m2(op1);
+}
+
+
+vuint8m4_t test___riscv_vlmul_trunc_v_u8m8_u8m4(vuint8m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u8m8_u8m4(op1);
+}
+
+
+vuint16mf4_t test___riscv_vlmul_trunc_v_u16mf2_u16mf4(vuint16mf2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16mf2_u16mf4(op1);
+}
+
+
+vuint16mf4_t test___riscv_vlmul_trunc_v_u16m1_u16mf4(vuint16m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m1_u16mf4(op1);
+}
+
+
+vuint16mf2_t test___riscv_vlmul_trunc_v_u16m1_u16mf2(vuint16m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m1_u16mf2(op1);
+}
+
+
+vuint16mf4_t test___riscv_vlmul_trunc_v_u16m2_u16mf4(vuint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m2_u16mf4(op1);
+}
+
+
+vuint16mf2_t test___riscv_vlmul_trunc_v_u16m2_u16mf2(vuint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m2_u16mf2(op1);
+}
+
+
+vuint16m1_t test___riscv_vlmul_trunc_v_u16m2_u16m1(vuint16m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m2_u16m1(op1);
+}
+
+
+vuint16mf4_t test___riscv_vlmul_trunc_v_u16m4_u16mf4(vuint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m4_u16mf4(op1);
+}
+
+
+vuint16mf2_t test___riscv_vlmul_trunc_v_u16m4_u16mf2(vuint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m4_u16mf2(op1);
+}
+
+
+vuint16m1_t test___riscv_vlmul_trunc_v_u16m4_u16m1(vuint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m4_u16m1(op1);
+}
+
+
+vuint16m2_t test___riscv_vlmul_trunc_v_u16m4_u16m2(vuint16m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m4_u16m2(op1);
+}
+
+
+vuint16mf4_t test___riscv_vlmul_trunc_v_u16m8_u16mf4(vuint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m8_u16mf4(op1);
+}
+
+
+vuint16mf2_t test___riscv_vlmul_trunc_v_u16m8_u16mf2(vuint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m8_u16mf2(op1);
+}
+
+
+vuint16m1_t test___riscv_vlmul_trunc_v_u16m8_u16m1(vuint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m8_u16m1(op1);
+}
+
+
+vuint16m2_t test___riscv_vlmul_trunc_v_u16m8_u16m2(vuint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m8_u16m2(op1);
+}
+
+
+vuint16m4_t test___riscv_vlmul_trunc_v_u16m8_u16m4(vuint16m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u16m8_u16m4(op1);
+}
+
+
+vuint32mf2_t test___riscv_vlmul_trunc_v_u32m1_u32mf2(vuint32m1_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m1_u32mf2(op1);
+}
+
+
+vuint32mf2_t test___riscv_vlmul_trunc_v_u32m2_u32mf2(vuint32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m2_u32mf2(op1);
+}
+
+
+vuint32m1_t test___riscv_vlmul_trunc_v_u32m2_u32m1(vuint32m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m2_u32m1(op1);
+}
+
+
+vuint32mf2_t test___riscv_vlmul_trunc_v_u32m4_u32mf2(vuint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m4_u32mf2(op1);
+}
+
+
+vuint32m1_t test___riscv_vlmul_trunc_v_u32m4_u32m1(vuint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m4_u32m1(op1);
+}
+
+
+vuint32m2_t test___riscv_vlmul_trunc_v_u32m4_u32m2(vuint32m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m4_u32m2(op1);
+}
+
+
+vuint32mf2_t test___riscv_vlmul_trunc_v_u32m8_u32mf2(vuint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m8_u32mf2(op1);
+}
+
+
+vuint32m1_t test___riscv_vlmul_trunc_v_u32m8_u32m1(vuint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m8_u32m1(op1);
+}
+
+
+vuint32m2_t test___riscv_vlmul_trunc_v_u32m8_u32m2(vuint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m8_u32m2(op1);
+}
+
+
+vuint32m4_t test___riscv_vlmul_trunc_v_u32m8_u32m4(vuint32m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u32m8_u32m4(op1);
+}
+
+
+vuint64m1_t test___riscv_vlmul_trunc_v_u64m2_u64m1(vuint64m2_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m2_u64m1(op1);
+}
+
+
+vuint64m1_t test___riscv_vlmul_trunc_v_u64m4_u64m1(vuint64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m4_u64m1(op1);
+}
+
+
+vuint64m2_t test___riscv_vlmul_trunc_v_u64m4_u64m2(vuint64m4_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m4_u64m2(op1);
+}
+
+
+vuint64m1_t test___riscv_vlmul_trunc_v_u64m8_u64m1(vuint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m8_u64m1(op1);
+}
+
+
+vuint64m2_t test___riscv_vlmul_trunc_v_u64m8_u64m2(vuint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m8_u64m2(op1);
+}
+
+
+vuint64m4_t test___riscv_vlmul_trunc_v_u64m8_u64m4(vuint64m8_t op1)
+{
+    return __riscv_vlmul_trunc_v_u64m8_u64m4(op1);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
+
+
+

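Editor's note on the test pattern above: the closing dg-final directive is the functional point of this file. __riscv_vlmul_trunc_* only narrows the LMUL view of a vector register group, so each wrapper should compile to a bare return, and the scan asserts that no vmv register-group move is emitted. As a minimal usage sketch (not taken from the commit; the function name head_add is invented here, and __riscv_vadd_vv_i32m1 is assumed available from the standard RVV intrinsics under a vector-enabled -march), truncation lets a large-LMUL value feed an interface that expects a smaller one:

    #include <stddef.h>
    #include <riscv_vector.h>

    /* Add 'addend' to the head of an m4 register group.
       __riscv_vlmul_trunc_v_i32m4_i32m1 reinterprets the low m1 part
       of the m4 group in place; no data is copied, so the truncation
       itself should cost no vmv instruction, matching the
       scan-assembler-not {vmv} check in the tests above.  */
    vint32m1_t head_add (vint32m4_t big, vint32m1_t addend, size_t vl)
    {
        vint32m1_t head = __riscv_vlmul_trunc_v_i32m4_i32m1 (big);
        return __riscv_vadd_vv_i32m1 (head, addend, vl);
    }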