public inbox for gcc-patches@gcc.gnu.org
* [PATCH] RISC-V: Add vlse/vsse C/C++ API intrinsics support
@ 2023-01-20  4:21 juzhe.zhong
  0 siblings, 0 replies; 2+ messages in thread
From: juzhe.zhong @ 2023-01-20  4:21 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Ju-Zhe Zhong

From: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/predicates.md (pmode_reg_or_0_operand): New predicate.
        * config/riscv/riscv-vector-builtins-bases.cc (class loadstore): Add vlse/vsse support.
        (BASE): Ditto.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.
        * config/riscv/riscv-vector-builtins-functions.def (vlse): Ditto.
        (vsse): Ditto.
        * config/riscv/riscv-vector-builtins.cc (function_expander::use_contiguous_load_insn): Ditto.
        * config/riscv/vector.md (@pred_strided_load<mode>): Ditto.
        (@pred_strided_store<mode>): Ditto.

gcc/testsuite/ChangeLog:

        * g++.target/riscv/rvv/base/vlse-1.C: New test.
        * g++.target/riscv/rvv/base/vlse_tu-1.C: New test.
        * g++.target/riscv/rvv/base/vlse_tum-1.C: New test.
        * g++.target/riscv/rvv/base/vlse_tumu-1.C: New test.
        * g++.target/riscv/rvv/base/vsse-1.C: New test.
        * gcc.target/riscv/rvv/base/vlse-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse-3.c: New test.
        * gcc.target/riscv/rvv/base/vlse-vsse-constraint-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_m-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_m-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse_m-3.c: New test.
        * gcc.target/riscv/rvv/base/vlse_mu-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_mu-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse_mu-3.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tu-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tu-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tu-3.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tum-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tum-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tum-3.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tumu-1.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tumu-2.c: New test.
        * gcc.target/riscv/rvv/base/vlse_tumu-3.c: New test.
        * gcc.target/riscv/rvv/base/vsse-1.c: New test.
        * gcc.target/riscv/rvv/base/vsse-2.c: New test.
        * gcc.target/riscv/rvv/base/vsse-3.c: New test.
        * gcc.target/riscv/rvv/base/vsse_m-1.c: New test.
        * gcc.target/riscv/rvv/base/vsse_m-2.c: New test.
        * gcc.target/riscv/rvv/base/vsse_m-3.c: New test.

---
 gcc/config/riscv/predicates.md                |   4 +
 .../riscv/riscv-vector-builtins-bases.cc      |  26 +-
 .../riscv/riscv-vector-builtins-bases.h       |   2 +
 .../riscv/riscv-vector-builtins-functions.def |   2 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  33 +-
 gcc/config/riscv/vector.md                    |  90 ++-
 .../g++.target/riscv/rvv/base/vlse-1.C        | 345 +++++++++
 .../g++.target/riscv/rvv/base/vlse_tu-1.C     | 345 +++++++++
 .../g++.target/riscv/rvv/base/vlse_tum-1.C    | 345 +++++++++
 .../g++.target/riscv/rvv/base/vlse_tumu-1.C   | 345 +++++++++
 .../g++.target/riscv/rvv/base/vsse-1.C        | 685 ++++++++++++++++++
 .../gcc.target/riscv/rvv/base/vlse-1.c        | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse-2.c        | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse-3.c        | 345 +++++++++
 .../riscv/rvv/base/vlse-vsse-constraint-1.c   | 113 +++
 .../gcc.target/riscv/rvv/base/vlse_m-1.c      | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_m-2.c      | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_m-3.c      | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_mu-1.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_mu-2.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_mu-3.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tu-1.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tu-2.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tu-3.c     | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tum-1.c    | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tum-2.c    | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tum-3.c    | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tumu-1.c   | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tumu-2.c   | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vlse_tumu-3.c   | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse-1.c        | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse-2.c        | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse-3.c        | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse_m-1.c      | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse_m-2.c      | 345 +++++++++
 .../gcc.target/riscv/rvv/base/vsse_m-3.c      | 345 +++++++++
 36 files changed, 10601 insertions(+), 14 deletions(-)
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/base/vlse-1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/base/vlse_tu-1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/base/vlse_tum-1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/base/vlse_tumu-1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/base/vsse-1.C
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse-vsse-constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-3.c

diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 5a5a49bf7c0..bae9cfa02dd 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -286,6 +286,10 @@
 	    (match_test "GET_CODE (op) == UNSPEC
 			 && (XINT (op, 1) == UNSPEC_VUNDEF)"))))
 
+(define_special_predicate "pmode_reg_or_0_operand"
+  (ior (match_operand 0 "const_0_operand")
+       (match_operand 0 "pmode_register_operand")))
+
 ;; The scalar operand can be directly broadcast by RVV instructions.
 (define_predicate "direct_broadcast_operand"
   (ior (match_operand 0 "register_operand")
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 0da4797d272..17a1294cf85 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -84,8 +84,8 @@ public:
   }
 };
 
-/* Implements vle.v/vse.v/vlm.v/vsm.v codegen.  */
-template <bool STORE_P>
+/* Implements vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v codegen.  */
+template <bool STORE_P, bool STRIDED_P = false>
 class loadstore : public function_base
 {
   unsigned int call_properties (const function_instance &) const override
@@ -106,9 +106,23 @@ class loadstore : public function_base
   rtx expand (function_expander &e) const override
   {
     if (STORE_P)
-      return e.use_contiguous_store_insn (code_for_pred_store (e.vector_mode ()));
+      {
+	if (STRIDED_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store (e.vector_mode ()));
+      }
     else
-      return e.use_contiguous_load_insn (code_for_pred_mov (e.vector_mode ()));
+      {
+	if (STRIDED_P)
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov (e.vector_mode ()));
+      }
   }
 };
 
@@ -118,6 +132,8 @@ static CONSTEXPR const loadstore<false> vle_obj;
 static CONSTEXPR const loadstore<true> vse_obj;
 static CONSTEXPR const loadstore<false> vlm_obj;
 static CONSTEXPR const loadstore<true> vsm_obj;
+static CONSTEXPR const loadstore<false, true> vlse_obj;
+static CONSTEXPR const loadstore<true, true> vsse_obj;
 
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
@@ -130,5 +146,7 @@ BASE (vle)
 BASE (vse)
 BASE (vlm)
 BASE (vsm)
+BASE (vlse)
+BASE (vsse)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 28151a8d8d2..d8676e94b28 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -30,6 +30,8 @@ extern const function_base *const vle;
 extern const function_base *const vse;
 extern const function_base *const vlm;
 extern const function_base *const vsm;
+extern const function_base *const vlse;
+extern const function_base *const vsse;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 63aa8fe32c8..348262928c8 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -44,5 +44,7 @@ DEF_RVV_FUNCTION (vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
 DEF_RVV_FUNCTION (vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
 DEF_RVV_FUNCTION (vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
 DEF_RVV_FUNCTION (vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_RVV_FUNCTION (vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
 
 #undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index f95fe0d58d5..b97a2c94550 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -167,6 +167,19 @@ static CONSTEXPR const rvv_arg_type_info scalar_ptr_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (const scalar_type *, ptrdiff_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_ptrdiff_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_ptrdiff), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, ptrdiff_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_ptrdiff_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_ptrdiff), rvv_arg_type_info (RVV_BASE_vector),
+     rvv_arg_type_info_end};
+
 /* A list of none preds that will be registered for intrinsic functions.  */
 static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
@@ -227,6 +240,22 @@ static CONSTEXPR const rvv_op_info b_v_scalar_ptr_ops
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
      scalar_ptr_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *,
+ * ptrdiff_t) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ptrdiff_ops
+  = {all_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     scalar_const_ptr_ptrdiff_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, ptrdiff_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ptrdiff_ops
+  = {all_ops,				/* Types */
+     OP_TYPE_v,				/* Suffix */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type */
+     scalar_ptr_ptrdiff_args /* Args */};
+
 /* A list of all RVV intrinsic functions.  */
 static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
@@ -920,7 +949,9 @@ function_expander::use_contiguous_load_insn (insn_code icode)
       add_input_operand (Pmode, get_tail_policy_for_pred (pred));
       add_input_operand (Pmode, get_mask_policy_for_pred (pred));
     }
-  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  if (opno != insn_data[icode].n_generator_args)
+    add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
 
   return generate_insn (icode);
 }
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index e1173f2d5a6..498cf21905b 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -33,6 +33,7 @@
   UNSPEC_VUNDEF
   UNSPEC_VPREDICATE
   UNSPEC_VLMAX
+  UNSPEC_STRIDED
 ])
 
 (define_constants [
@@ -204,28 +205,56 @@
 
 ;; The index of operand[] to get the avl op.
 (define_attr "vl_op_idx" ""
-	(cond [(eq_attr "type" "vlde,vste,vimov,vfmov,vldm,vstm,vlds,vmalu")
-	 (const_int 4)]
-	(const_int INVALID_ATTRIBUTE)))
+  (cond [(eq_attr "type" "vlde,vste,vimov,vfmov,vldm,vstm,vmalu,vsts")
+	   (const_int 4)
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
+	;; whereas it is pred_strided_load if operands[3] is vector mode.
+         (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+             (const_int 5)
+             (const_int 4))]
+  (const_int INVALID_ATTRIBUTE)))
 
 ;; The tail policy op value.
 (define_attr "ta" ""
-  (cond [(eq_attr "type" "vlde,vimov,vfmov,vlds")
-	   (symbol_ref "riscv_vector::get_ta(operands[5])")]
+  (cond [(eq_attr "type" "vlde,vimov,vfmov")
+	   (symbol_ref "riscv_vector::get_ta(operands[5])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (symbol_ref "riscv_vector::get_ta(operands[6])")
+	     (symbol_ref "riscv_vector::get_ta(operands[5])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; The mask policy op value.
 (define_attr "ma" ""
-  (cond [(eq_attr "type" "vlde,vlds")
-	   (symbol_ref "riscv_vector::get_ma(operands[6])")]
+  (cond [(eq_attr "type" "vlde")
+	   (symbol_ref "riscv_vector::get_ma(operands[6])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (symbol_ref "riscv_vector::get_ma(operands[7])")
+	     (symbol_ref "riscv_vector::get_ma(operands[6])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; The avl type value.
 (define_attr "avl_type" ""
-  (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov,vlds,vlds")
+  (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov")
 	   (symbol_ref "INTVAL (operands[7])")
 	 (eq_attr "type" "vldm,vstm,vimov,vmalu,vmalu")
-	   (symbol_ref "INTVAL (operands[5])")]
+	   (symbol_ref "INTVAL (operands[5])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (const_int INVALID_ATTRIBUTE)
+	     (symbol_ref "INTVAL (operands[7])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; -----------------------------------------------------------------
@@ -760,3 +789,46 @@
    vlse<sew>.v\t%0,%3,zero"
   [(set_attr "type" "vimov,vfmov,vlds,vlds")
    (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated Strided loads/stores
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 7.5. Vector Strided Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_strided_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand:V 3 "memory_operand"         "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_STRIDED)
+	  (match_operand:V 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_VECTOR"
+  "vlse<sew>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:V 3 "register_operand"       "   vr")] UNSPEC_STRIDED)
+	  (match_dup 0)))]
+  "TARGET_VECTOR"
+  "vsse<sew>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")])
diff --git a/gcc/testsuite/g++.target/riscv/rvv/base/vlse-1.C b/gcc/testsuite/g++.target/riscv/rvv/base/vlse-1.C
new file mode 100644
index 00000000000..4c71cad9ee6
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/base/vlse-1.C
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8(vbool64_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8(vbool32_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8(vbool16_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8(vbool8_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8(vbool4_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8(vbool2_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8(vbool1_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8(mask,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16(vbool64_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16(vbool32_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16(vbool16_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16(vbool8_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16(vbool4_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16(vbool2_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16(mask,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32(vbool64_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32(vbool32_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32(vbool16_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32(vbool8_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32(vbool4_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32(vbool64_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32(vbool32_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32(vbool16_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32(vbool8_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32(vbool4_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32(mask,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64(vbool64_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64(vbool32_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64(vbool16_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64(vbool8_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64(vbool64_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64(vbool32_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64(vbool16_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64(vbool8_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64(mask,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tu-1.C b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tu-1.C
new file mode 100644
index 00000000000..ef75b05d46c
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tu-1.C
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_tu(vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_tu(vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_tu(vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_tu(vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_tu(vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_tu(vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_tu(vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_tu(vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_tu(vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_tu(vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_tu(vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_tu(vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_tu(vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_tu(vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tu(merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_tu(vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_tu(vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_tu(vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_tu(vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_tu(vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_tu(vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_tu(vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_tu(vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_tu(vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_tu(vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_tu(vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_tu(vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tu(merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_tu(vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_tu(vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_tu(vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_tu(vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_tu(vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_tu(vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_tu(vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_tu(vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_tu(vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_tu(vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_tu(vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_tu(vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_tu(vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_tu(vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_tu(vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tu(merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_tu(vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_tu(vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_tu(vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_tu(vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_tu(vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_tu(vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_tu(vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_tu(vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_tu(vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_tu(vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_tu(vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_tu(vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tu(merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tum-1.C b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tum-1.C
new file mode 100644
index 00000000000..9e05bb3583f
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tum-1.C
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_tum(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_tum(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_tum(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_tum(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_tum(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_tum(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_tum(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_tum(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_tum(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_tum(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_tum(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_tum(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_tum(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_tum(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tum(mask,merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_tum(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_tum(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_tum(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_tum(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_tum(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_tum(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_tum(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_tum(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_tum(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_tum(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_tum(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_tum(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tum(mask,merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_tum(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_tum(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_tum(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_tum(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_tum(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_tum(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_tum(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_tum(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_tum(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_tum(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_tum(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_tum(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_tum(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_tum(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_tum(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_tum(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_tum(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_tum(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_tum(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_tum(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_tum(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_tum(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_tum(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_tum(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_tum(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_tum(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_tum(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tum(mask,merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
diff --git a/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tumu-1.C b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tumu-1.C
new file mode 100644
index 00000000000..04fa611dac8
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/base/vlse_tumu-1.C
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_tumu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_tumu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_tumu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_tumu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_tumu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_tumu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_tumu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_tumu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_tumu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_tumu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_tumu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_tumu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_tumu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_tumu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_tumu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_tumu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_tumu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_tumu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_tumu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_tumu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_tumu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_tumu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_tumu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_tumu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_tumu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_tumu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_tumu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_tumu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_tumu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_tumu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_tumu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_tumu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_tumu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_tumu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_tumu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_tumu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_tumu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_tumu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_tumu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_tumu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_tumu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_tumu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_tumu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_tumu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_tumu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_tumu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_tumu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_tumu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_tumu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_tumu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_tumu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_tumu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_tumu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_tumu(mask,merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/g++.target/riscv/rvv/base/vsse-1.C b/gcc/testsuite/g++.target/riscv/rvv/base/vsse-1.C
new file mode 100644
index 00000000000..a2d55462c98
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/base/vsse-1.C
@@ -0,0 +1,685 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool64_t mask,int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool32_t mask,int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool16_t mask,int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool8_t mask,int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool4_t mask,int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool2_t mask,int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool1_t mask,int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool64_t mask,int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool32_t mask,int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool16_t mask,int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool8_t mask,int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool4_t mask,int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool2_t mask,int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool64_t mask,int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool32_t mask,int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool16_t mask,int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool8_t mask,int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool4_t mask,int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool64_t mask,float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool32_t mask,float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool16_t mask,float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool8_t mask,float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32(vbool4_t mask,float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool64_t mask,int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool32_t mask,int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool16_t mask,int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool8_t mask,int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool64_t mask,double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool32_t mask,double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool16_t mask,double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64(vbool8_t mask,double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64(mask,base,bstride,value,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-1.c
new file mode 100644
index 00000000000..b7e7ef064b7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8(base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4(base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2(base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1(base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2(base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4(base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8(base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8(base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4(base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2(base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1(base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2(base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4(base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8(base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4(base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2(base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1(base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2(base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4(base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8(base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4(base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2(base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1(base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2(base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4(base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8(base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2(base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1(base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2(base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4(base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8(base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2(base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1(base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2(base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4(base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8(base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2(base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1(base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2(base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4(base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8(base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1(base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2(base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4(base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8(base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1(base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2(base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4(base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8(base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1(base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2(base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4(base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8(base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-2.c
new file mode 100644
index 00000000000..221aacde919
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8(base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4(base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2(base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1(base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2(base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4(base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8(base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8(base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4(base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2(base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1(base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2(base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4(base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8(base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4(base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2(base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1(base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2(base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4(base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8(base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4(base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2(base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1(base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2(base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4(base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8(base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2(base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1(base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2(base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4(base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8(base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2(base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1(base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2(base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4(base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8(base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2(base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1(base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2(base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4(base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8(base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1(base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2(base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4(base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8(base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1(base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2(base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4(base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8(base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1(base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2(base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4(base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8(base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-3.c
new file mode 100644
index 00000000000..2c19375c332
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8(base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4(base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2(base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1(base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2(base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4(base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8(int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8(base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8(base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4(base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2(base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1(base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2(base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4(base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8(base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4(base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2(base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1(base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2(base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4(base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8(int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8(base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4(base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2(base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1(base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2(base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4(base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8(base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2(base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1(base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2(base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4(base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8(int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8(base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2(base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1(base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2(base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4(base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8(base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2(base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1(base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2(base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4(base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8(float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8(base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1(base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2(base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4(base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8(int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8(base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1(base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2(base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4(base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8(base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1(base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2(base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4(base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8(double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8(base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-vsse-constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-vsse-constraint-1.c
new file mode 100644
index 00000000000..0b88a765ab2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse-vsse-constraint-1.c
@@ -0,0 +1,113 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	vsetivli\tzero,4,e32,m1,tu,ma
+**	vlse32\.v\tv[0-9]+,0\([a-x0-9]+\),\s*zero
+**	vlse32\.v\tv[0-9]+,0\([a-x0-9]+\),\s*zero
+**	vsse32\.v\tv[0-9]+,0\([a-x0-9]+\),\s*zero
+**	ret
+*/
+void f1 (float * in, float *out)
+{
+    vfloat32m1_t v = __riscv_vlse32_v_f32m1 (in, 0, 4);
+    vfloat32m1_t v2 = __riscv_vlse32_v_f32m1_tu (v, in, 0, 4);
+    __riscv_vsse32_v_f32m1 (out, 0, v2, 4);
+}
+
+/*
+** f2:
+**	vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	li\t[a-x0-9]+,\s*7
+**	vsetivli\tzero,4,e32,m1,ta,ma
+**	vlse32\.v\tv[0-9]+,0\([a-x0-9]+\),\s*[a-x0-9]+,v0\.t
+**	vsse32\.v\tv[0-9]+,0\([a-x0-9]+\),\s*[a-x0-9]+
+**	ret
+*/
+void f2 (float * in, float *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vlse32_v_f32m1 (in, 7, 4);
+    vfloat32m1_t v2 = __riscv_vlse32_v_f32m1_m (mask, in, 7, 4);
+    __riscv_vsse32_v_f32m1 (out, 7, v2, 4);
+}
+
+
+/*
+** f3:
+**	vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	vsetivli\tzero,4,e32,m1,tu,mu
+**	vlse32\.v\tv[0-9]+,0\([a-x0-9]+\),zero
+**	vlse32\.v\tv[0-9]+,0\([a-x0-9]+\),zero,v0\.t
+**	vsse32\.v\tv[0-9]+,0\([a-x0-9]+\),zero
+**	ret
+*/
+void f3 (float * in, float *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vlse32_v_f32m1 (in, 0, 4);
+    vfloat32m1_t v2 = __riscv_vlse32_v_f32m1_tumu (mask, v, in, 0, 4);
+    __riscv_vsse32_v_f32m1 (out, 0, v2, 4);
+}
+
+/*
+** f4:
+**	li\s+[a-x0-9]+,\s*17
+**	vsetivli\tzero,4,e8,mf8,tu,ma
+**	vlse8\.v\tv[0-9]+,0\([a-x0-9]+\),[a-x0-9]+
+**	vlse8\.v\tv[0-9]+,0\([a-x0-9]+\),[a-x0-9]+
+**	vsse8\.v\tv[0-9]+,0\([a-x0-9]+\),[a-x0-9]+
+**	ret
+*/
+void f4 (int8_t * in, int8_t *out)
+{
+    vint8mf8_t v = __riscv_vlse8_v_i8mf8 (in, 17, 4);
+    vint8mf8_t v2 = __riscv_vlse8_v_i8mf8_tu (v, in, 17, 4);
+    __riscv_vsse8_v_i8mf8 (out, 17, v2, 4);
+}
+
+/*
+** f5:
+**	vsetvli\t[a-x0-9]+,zero,e8,mf8,ta,ma
+**	vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	vsetivli\tzero,4,e8,mf8,ta,ma
+**	vlse8\.v\tv[0-9]+,0\([a-x0-9]+\),zero,v0\.t
+**	vsse8\.v\tv[0-9]+,0\([a-x0-9]+\),zero
+**	ret
+*/
+void f5 (int8_t * in, int8_t *out)
+{
+    vbool64_t mask = *(vbool64_t*)in;
+    asm volatile ("":::"memory");
+    vint8mf8_t v = __riscv_vlse8_v_i8mf8 (in, 0, 4);
+    vint8mf8_t v2 = __riscv_vlse8_v_i8mf8_m (mask, in, 0, 4);
+    __riscv_vsse8_v_i8mf8 (out, 0, v2, 4);
+}
+
+/*
+** f6:
+**	vsetvli\t[a-x0-9]+,zero,e8,mf8,ta,ma
+**	vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	li\s+[a-x0-9]+,\s*999
+**	vsetivli\tzero,4,e8,mf8,tu,mu
+**	vlse8\.v\tv[0-9]+,0\([a-x0-9]+\),\s*[a-x0-9]+
+**	vlse8\.v\tv[0-9]+,0\([a-x0-9]+\),\s*[a-x0-9]+,v0\.t
+**	vsse8\.v\tv[0-9]+,0\([a-x0-9]+\),\s*[a-x0-9]+,v0\.t
+**	ret
+*/
+void f6 (int8_t * in, int8_t *out)
+{
+    vbool64_t mask = *(vbool64_t*)in;
+    asm volatile ("":::"memory");
+    vint8mf8_t v = __riscv_vlse8_v_i8mf8 (in, 999, 4);
+    vint8mf8_t v2 = __riscv_vlse8_v_i8mf8_tumu (mask, v, in, 999, 4);
+    __riscv_vsse8_v_i8mf8_m (mask, out, 999, v2, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-1.c
new file mode 100644
index 00000000000..573cf8a37fb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_m(mask,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_m(mask,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_m(mask,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_m(mask,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_m(mask,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_m(mask,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_m(mask,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_m(mask,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_m(mask,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_m(mask,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_m(mask,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_m(mask,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_m(mask,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_m(mask,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_m(mask,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_m(mask,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_m(mask,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_m(mask,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_m(mask,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_m(mask,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_m(mask,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_m(mask,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_m(mask,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_m(mask,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_m(mask,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_m(mask,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_m(mask,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_m(mask,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_m(mask,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_m(mask,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_m(mask,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_m(mask,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_m(mask,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_m(mask,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_m(mask,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_m(mask,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_m(mask,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_m(mask,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_m(mask,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_m(mask,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_m(mask,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_m(mask,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_m(mask,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_m(mask,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_m(mask,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_m(mask,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_m(mask,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_m(mask,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_m(mask,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_m(mask,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_m(mask,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_m(mask,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_m(mask,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-2.c
new file mode 100644
index 00000000000..cc40364d5fd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_m(mask,base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_m(mask,base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_m(mask,base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_m(mask,base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_m(mask,base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_m(mask,base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_m(mask,base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_m(mask,base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_m(mask,base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_m(mask,base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_m(mask,base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_m(mask,base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_m(mask,base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_m(mask,base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_m(mask,base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_m(mask,base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_m(mask,base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_m(mask,base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_m(mask,base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_m(mask,base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_m(mask,base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_m(mask,base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_m(mask,base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_m(mask,base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_m(mask,base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_m(mask,base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_m(mask,base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_m(mask,base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_m(mask,base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_m(mask,base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_m(mask,base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_m(mask,base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_m(mask,base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_m(mask,base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_m(mask,base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_m(mask,base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_m(mask,base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_m(mask,base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_m(mask,base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_m(mask,base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_m(mask,base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_m(mask,base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_m(mask,base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_m(mask,base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_m(mask,base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_m(mask,base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_m(mask,base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_m(mask,base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_m(mask,base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_m(mask,base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_m(mask,base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_m(mask,base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_m(mask,base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-3.c
new file mode 100644
index 00000000000..d28908885f3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_m-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_m(mask,base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_m(mask,base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_m(mask,base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_m(mask,base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_m(mask,base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_m(mask,base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_m(mask,base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_m(mask,base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_m(mask,base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_m(mask,base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_m(mask,base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_m(mask,base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_m(mask,base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_m(mask,base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_m(mask,base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_m(mask,base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_m(mask,base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_m(mask,base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_m(mask,base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_m(mask,base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_m(mask,base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_m(mask,base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_m(mask,base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_m(mask,base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_m(mask,base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_m(mask,base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_m(mask,base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_m(mask,base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_m(mask,base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_m(mask,base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_m(mask,base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_m(mask,base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_m(mask,base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_m(mask,base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_m(mask,base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_m(mask,base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_m(mask,base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_m(mask,base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_m(mask,base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_m(mask,base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_m(mask,base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_m(mask,base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_m(mask,base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_m(mask,base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_m(mask,base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_m(mask,base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_m(mask,base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_m(mask,base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_m(mask,base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_m(mask,base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_m(mask,base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_m(mask,base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_m(mask,base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-1.c
new file mode 100644
index 00000000000..03c28707171
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_mu(mask,merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_mu(mask,merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_mu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_mu(mask,merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_mu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_mu(mask,merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_mu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_mu(mask,merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_mu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_mu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_mu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_mu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_mu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_mu(mask,merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_mu(mask,merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_mu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_mu(mask,merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_mu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_mu(mask,merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_mu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_mu(mask,merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_mu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_mu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_mu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_mu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_mu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_mu(mask,merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_mu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_mu(mask,merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_mu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_mu(mask,merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_mu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_mu(mask,merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_mu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_mu(mask,merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_mu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_mu(mask,merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_mu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_mu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_mu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_mu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_mu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_mu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_mu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_mu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_mu(mask,merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_mu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_mu(mask,merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_mu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_mu(mask,merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_mu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_mu(mask,merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_mu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_mu(mask,merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_mu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_mu(mask,merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_mu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_mu(mask,merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_mu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_mu(mask,merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_mu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_mu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_mu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_mu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_mu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_mu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_mu(mask,merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-2.c
new file mode 100644
index 00000000000..bc8727f6f85
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_mu(mask,merge,base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_mu(mask,merge,base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_mu(mask,merge,base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_mu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_mu(mask,merge,base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_mu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_mu(mask,merge,base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_mu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_mu(mask,merge,base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_mu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_mu(mask,merge,base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_mu(mask,merge,base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_mu(mask,merge,base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_mu(mask,merge,base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_mu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_mu(mask,merge,base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_mu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_mu(mask,merge,base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_mu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_mu(mask,merge,base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_mu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_mu(mask,merge,base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_mu(mask,merge,base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_mu(mask,merge,base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_mu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_mu(mask,merge,base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_mu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_mu(mask,merge,base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_mu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_mu(mask,merge,base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_mu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_mu(mask,merge,base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_mu(mask,merge,base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_mu(mask,merge,base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_mu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_mu(mask,merge,base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_mu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_mu(mask,merge,base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_mu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_mu(mask,merge,base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_mu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_mu(mask,merge,base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_mu(mask,merge,base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_mu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_mu(mask,merge,base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_mu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_mu(mask,merge,base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_mu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_mu(mask,merge,base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_mu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_mu(mask,merge,base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_mu(mask,merge,base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_mu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_mu(mask,merge,base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_mu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_mu(mask,merge,base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_mu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_mu(mask,merge,base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_mu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_mu(mask,merge,base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_mu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_mu(mask,merge,base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_mu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_mu(mask,merge,base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_mu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_mu(mask,merge,base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_mu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_mu(mask,merge,base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_mu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_mu(mask,merge,base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_mu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_mu(mask,merge,base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_mu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_mu(mask,merge,base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_mu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_mu(mask,merge,base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_mu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_mu(mask,merge,base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_mu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_mu(mask,merge,base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_mu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_mu(mask,merge,base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_mu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_mu(mask,merge,base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_mu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_mu(mask,merge,base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_mu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_mu(mask,merge,base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_mu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_mu(mask,merge,base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_mu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_mu(mask,merge,base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_mu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_mu(mask,merge,base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-3.c
new file mode 100644
index 00000000000..0e6a4863c33
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_mu-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_mu(mask,merge,base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_mu(mask,merge,base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_mu(mask,merge,base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_mu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_mu(mask,merge,base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_mu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_mu(mask,merge,base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_mu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_mu(mask,merge,base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_mu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_mu(mask,merge,base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_mu(mask,merge,base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_mu(mask,merge,base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_mu(mask,merge,base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_mu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_mu(mask,merge,base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_mu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_mu(mask,merge,base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_mu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_mu(mask,merge,base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_mu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_mu(mask,merge,base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_mu(mask,merge,base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_mu(mask,merge,base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_mu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_mu(mask,merge,base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_mu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_mu(mask,merge,base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_mu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_mu(mask,merge,base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_mu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_mu(mask,merge,base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_mu(mask,merge,base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_mu(mask,merge,base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_mu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_mu(mask,merge,base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_mu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_mu(mask,merge,base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_mu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_mu(mask,merge,base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_mu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_mu(mask,merge,base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_mu(mask,merge,base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_mu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_mu(mask,merge,base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_mu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_mu(mask,merge,base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_mu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_mu(mask,merge,base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_mu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_mu(mask,merge,base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_mu(mask,merge,base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_mu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_mu(mask,merge,base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_mu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_mu(mask,merge,base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_mu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_mu(mask,merge,base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_mu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_mu(mask,merge,base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_mu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_mu(mask,merge,base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_mu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_mu(mask,merge,base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_mu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_mu(mask,merge,base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_mu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_mu(mask,merge,base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_mu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_mu(mask,merge,base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_mu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_mu(mask,merge,base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_mu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_mu(mask,merge,base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_mu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_mu(mask,merge,base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_mu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_mu(mask,merge,base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_mu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_mu(mask,merge,base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_mu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_mu(mask,merge,base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_mu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_mu(mask,merge,base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_mu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_mu(mask,merge,base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_mu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_mu(mask,merge,base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_mu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_mu(mask,merge,base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_mu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_mu(mask,merge,base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_mu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_mu(mask,merge,base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-1.c
new file mode 100644
index 00000000000..32d687645d9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tu(vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tu(merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tu(vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tu(merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tu(vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tu(merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tu(vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tu(merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tu(vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tu(merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tu(vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tu(merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tu(vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tu(merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tu(vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tu(merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tu(vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tu(merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tu(vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tu(merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tu(vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tu(merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tu(vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tu(merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tu(vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tu(merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tu(vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tu(merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tu(vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tu(merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tu(vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tu(merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tu(vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tu(merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tu(vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tu(merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tu(vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tu(merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tu(vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tu(merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tu(vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tu(merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tu(vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tu(merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tu(vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tu(merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tu(vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tu(merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tu(vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tu(merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tu(vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tu(merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tu(vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tu(merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tu(vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tu(merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tu(vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tu(merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tu(vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tu(merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tu(vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tu(merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tu(vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tu(merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tu(vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tu(merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tu(vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tu(merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tu(vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tu(merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tu(vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tu(merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tu(vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tu(merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tu(vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tu(merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tu(vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tu(merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tu(vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tu(merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tu(vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tu(merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tu(vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tu(merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tu(vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tu(merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tu(vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tu(merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tu(vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tu(merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tu(vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tu(merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tu(vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tu(merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tu(vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tu(merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tu(vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tu(merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tu(vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tu(merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tu(vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tu(merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tu(vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tu(merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tu(vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tu(merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-2.c
new file mode 100644
index 00000000000..70575848095
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tu(vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tu(merge,base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tu(vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tu(merge,base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tu(vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tu(merge,base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tu(vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tu(merge,base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tu(vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tu(merge,base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tu(vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tu(merge,base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tu(vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tu(merge,base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tu(vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tu(merge,base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tu(vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tu(merge,base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tu(vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tu(merge,base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tu(vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tu(merge,base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tu(vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tu(merge,base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tu(vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tu(merge,base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tu(vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tu(merge,base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tu(vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tu(merge,base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tu(vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tu(merge,base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tu(vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tu(merge,base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tu(vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tu(merge,base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tu(vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tu(merge,base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tu(vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tu(merge,base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tu(vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tu(merge,base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tu(vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tu(merge,base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tu(vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tu(merge,base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tu(vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tu(merge,base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tu(vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tu(merge,base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tu(vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tu(merge,base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tu(vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tu(merge,base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tu(vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tu(merge,base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tu(vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tu(merge,base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tu(vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tu(merge,base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tu(vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tu(merge,base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tu(vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tu(merge,base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tu(vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tu(merge,base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tu(vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tu(merge,base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tu(vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tu(merge,base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tu(vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tu(merge,base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tu(vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tu(merge,base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tu(vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tu(merge,base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tu(vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tu(merge,base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tu(vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tu(merge,base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tu(vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tu(merge,base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tu(vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tu(merge,base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tu(vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tu(merge,base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tu(vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tu(merge,base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tu(vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tu(merge,base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tu(vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tu(merge,base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tu(vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tu(merge,base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tu(vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tu(merge,base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tu(vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tu(merge,base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tu(vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tu(merge,base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tu(vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tu(merge,base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tu(vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tu(merge,base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tu(vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tu(merge,base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-3.c
new file mode 100644
index 00000000000..c9a3c4bf561
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tu-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tu(vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tu(merge,base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tu(vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tu(merge,base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tu(vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tu(merge,base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tu(vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tu(merge,base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tu(vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tu(merge,base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tu(vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tu(merge,base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tu(vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tu(merge,base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tu(vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tu(merge,base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tu(vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tu(merge,base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tu(vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tu(merge,base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tu(vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tu(merge,base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tu(vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tu(merge,base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tu(vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tu(merge,base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tu(vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tu(merge,base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tu(vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tu(merge,base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tu(vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tu(merge,base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tu(vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tu(merge,base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tu(vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tu(merge,base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tu(vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tu(merge,base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tu(vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tu(merge,base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tu(vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tu(merge,base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tu(vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tu(merge,base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tu(vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tu(merge,base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tu(vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tu(merge,base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tu(vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tu(merge,base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tu(vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tu(merge,base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tu(vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tu(merge,base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tu(vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tu(merge,base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tu(vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tu(merge,base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tu(vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tu(merge,base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tu(vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tu(merge,base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tu(vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tu(merge,base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tu(vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tu(merge,base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tu(vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tu(merge,base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tu(vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tu(merge,base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tu(vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tu(merge,base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tu(vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tu(merge,base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tu(vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tu(merge,base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tu(vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tu(merge,base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tu(vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tu(merge,base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tu(vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tu(merge,base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tu(vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tu(merge,base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tu(vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tu(merge,base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tu(vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tu(merge,base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tu(vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tu(merge,base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tu(vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tu(merge,base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tu(vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tu(merge,base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tu(vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tu(merge,base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tu(vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tu(merge,base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tu(vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tu(merge,base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tu(vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tu(merge,base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tu(vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tu(merge,base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tu(vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tu(merge,base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-1.c
new file mode 100644
index 00000000000..b8127099c8d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tum(mask,merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tum(mask,merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tum(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tum(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tum(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tum(mask,merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tum(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tum(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tum(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tum(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tum(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tum(mask,merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tum(mask,merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tum(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tum(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tum(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tum(mask,merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tum(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tum(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tum(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tum(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tum(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tum(mask,merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tum(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tum(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tum(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tum(mask,merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tum(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tum(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tum(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tum(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tum(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tum(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tum(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tum(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tum(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tum(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tum(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tum(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tum(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tum(mask,merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tum(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tum(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tum(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tum(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tum(mask,merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tum(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tum(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tum(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tum(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tum(mask,merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tum(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tum(mask,merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-2.c
new file mode 100644
index 00000000000..0bf740034e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tum(mask,merge,base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tum(mask,merge,base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tum(mask,merge,base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tum(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tum(mask,merge,base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tum(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tum(mask,merge,base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tum(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tum(mask,merge,base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tum(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tum(mask,merge,base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tum(mask,merge,base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tum(mask,merge,base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tum(mask,merge,base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tum(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tum(mask,merge,base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tum(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tum(mask,merge,base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tum(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tum(mask,merge,base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tum(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tum(mask,merge,base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tum(mask,merge,base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tum(mask,merge,base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tum(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tum(mask,merge,base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tum(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tum(mask,merge,base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tum(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tum(mask,merge,base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tum(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tum(mask,merge,base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tum(mask,merge,base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tum(mask,merge,base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tum(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tum(mask,merge,base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tum(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tum(mask,merge,base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tum(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tum(mask,merge,base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tum(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tum(mask,merge,base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tum(mask,merge,base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tum(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tum(mask,merge,base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tum(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tum(mask,merge,base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tum(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tum(mask,merge,base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tum(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tum(mask,merge,base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tum(mask,merge,base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tum(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tum(mask,merge,base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tum(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tum(mask,merge,base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tum(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tum(mask,merge,base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tum(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tum(mask,merge,base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tum(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tum(mask,merge,base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tum(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tum(mask,merge,base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tum(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tum(mask,merge,base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tum(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tum(mask,merge,base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tum(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tum(mask,merge,base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tum(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tum(mask,merge,base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tum(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tum(mask,merge,base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tum(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tum(mask,merge,base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tum(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tum(mask,merge,base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tum(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tum(mask,merge,base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tum(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tum(mask,merge,base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tum(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tum(mask,merge,base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tum(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tum(mask,merge,base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tum(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tum(mask,merge,base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tum(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tum(mask,merge,base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tum(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tum(mask,merge,base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tum(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tum(mask,merge,base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0\.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-3.c
new file mode 100644
index 00000000000..5e627854537
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tum-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tum(mask,merge,base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tum(mask,merge,base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tum(mask,merge,base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tum(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tum(mask,merge,base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tum(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tum(mask,merge,base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tum(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tum(mask,merge,base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tum(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tum(mask,merge,base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tum(mask,merge,base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tum(mask,merge,base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tum(mask,merge,base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tum(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tum(mask,merge,base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tum(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tum(mask,merge,base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tum(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tum(mask,merge,base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tum(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tum(mask,merge,base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tum(mask,merge,base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tum(mask,merge,base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tum(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tum(mask,merge,base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tum(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tum(mask,merge,base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tum(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tum(mask,merge,base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tum(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tum(mask,merge,base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tum(mask,merge,base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tum(mask,merge,base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tum(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tum(mask,merge,base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tum(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tum(mask,merge,base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tum(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tum(mask,merge,base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tum(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tum(mask,merge,base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tum(mask,merge,base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tum(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tum(mask,merge,base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tum(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tum(mask,merge,base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tum(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tum(mask,merge,base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tum(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tum(mask,merge,base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tum(mask,merge,base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tum(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tum(mask,merge,base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tum(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tum(mask,merge,base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tum(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tum(mask,merge,base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tum(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tum(mask,merge,base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tum(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tum(mask,merge,base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tum(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tum(mask,merge,base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tum(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tum(mask,merge,base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tum(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tum(mask,merge,base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tum(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tum(mask,merge,base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tum(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tum(mask,merge,base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tum(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tum(mask,merge,base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tum(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tum(mask,merge,base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tum(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tum(mask,merge,base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tum(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tum(mask,merge,base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tum(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tum(mask,merge,base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tum(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tum(mask,merge,base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tum(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tum(mask,merge,base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tum(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tum(mask,merge,base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tum(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tum(mask,merge,base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tum(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tum(mask,merge,base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tum(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tum(mask,merge,base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
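The `_tum`/`_tumu` intrinsics exercised above follow the standard RVV tail-undisturbed and mask-undisturbed policies. As a reading aid (not part of the patch), a scalar reference model sketches what a masked strided load with the `tumu` policy computes; `VLEN_ELEMS` is a hypothetical stand-in for the register's element count:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative scalar model of a vlse32 tumu operation (not the GCC
   implementation): tail elements (i >= vl) and masked-off body elements
   keep the merge value already in dest_merge (tail-undisturbed,
   mask-undisturbed); active elements load from base + i * bstride bytes.
   VLEN_ELEMS stands in for the vector register's element capacity.  */
#define VLEN_ELEMS 8

void vlse32_tumu_ref (const uint8_t *mask, int32_t *dest_merge,
                      const int32_t *base, ptrdiff_t bstride, size_t vl)
{
  for (size_t i = 0; i < VLEN_ELEMS; i++)
    {
      if (i >= vl)   /* tail element: undisturbed (tu).  */
        continue;
      if (!mask[i])  /* masked-off body element: undisturbed (mu).  */
        continue;
      /* bstride is a byte stride, matching the intrinsic's ptrdiff_t
         argument.  */
      dest_merge[i]
        = *(const int32_t *) ((const uint8_t *) base + i * bstride);
    }
}
```

The `_tum` variants above behave the same for the tail but leave masked-off behavior to the mask policy chosen by vsetvli (`ma` or `mu`), which is why their scan patterns accept `m[au]`.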
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-1.c
new file mode 100644
index 00000000000..35c846e1660
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tumu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tumu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tumu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tumu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tumu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tumu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tumu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tumu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tumu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tumu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tumu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tumu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tumu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tumu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tumu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tumu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tumu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tumu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tumu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tumu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tumu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tumu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tumu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tumu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tumu(mask,merge,base,bstride,vl);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tumu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tumu(mask,merge,base,bstride,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
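The companion test files pass constant vl values on purpose: 31 fits the 5-bit unsigned immediate of vsetivli, while 32 does not and must be materialized into a register for vsetvli, which is what the differing `vsetivli zero,31` and `vsetvli zero,[a-x0-9]+` scan patterns check. A hypothetical helper (illustrative only, not backend code) captures the selection rule:

```c
#include <stdbool.h>

/* Illustrative predicate (not the actual backend logic): vsetivli encodes
   the AVL as a 5-bit unsigned immediate, so only constant vl values in
   [0, 31] can use it; larger constants such as the 32 used in the other
   tests fall back to a register operand and vsetvli.  */
static bool
const_vl_fits_vsetivli (long vl)
{
  return vl >= 0 && vl <= 31;
}
```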
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-2.c
new file mode 100644
index 00000000000..fa388faae56
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tumu(mask,merge,base,bstride,31);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tumu(mask,merge,base,bstride,31);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tumu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tumu(mask,merge,base,bstride,31);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tumu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tumu(mask,merge,base,bstride,31);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tumu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tumu(mask,merge,base,bstride,31);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tumu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tumu(mask,merge,base,bstride,31);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tumu(mask,merge,base,bstride,31);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tumu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tumu(mask,merge,base,bstride,31);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tumu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tumu(mask,merge,base,bstride,31);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tumu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tumu(mask,merge,base,bstride,31);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tumu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tumu(mask,merge,base,bstride,31);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tumu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tumu(mask,merge,base,bstride,31);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tumu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tumu(mask,merge,base,bstride,31);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tumu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tumu(mask,merge,base,bstride,31);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tumu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tumu(mask,merge,base,bstride,31);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tumu(mask,merge,base,bstride,31);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tumu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tumu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tumu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tumu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tumu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tumu(mask,merge,base,bstride,31);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tumu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tumu(mask,merge,base,bstride,31);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tumu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tumu(mask,merge,base,bstride,31);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tumu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tumu(mask,merge,base,bstride,31);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tumu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tumu(mask,merge,base,bstride,31);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tumu(mask,merge,base,bstride,31);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tumu(mask,merge,base,bstride,31);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tumu(mask,merge,base,bstride,31);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tumu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tumu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tumu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tumu(mask,merge,base,bstride,31);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tumu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tumu(mask,merge,base,bstride,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-3.c
new file mode 100644
index 00000000000..659f57beb34
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vlse_tumu-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+vint8mf8_t
+test___riscv_vlse8_v_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf8_tumu(mask,merge,base,bstride,32);
+}
+
+vint8mf4_t
+test___riscv_vlse8_v_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf4_tumu(mask,merge,base,bstride,32);
+}
+
+vint8mf2_t
+test___riscv_vlse8_v_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vint8m1_t
+test___riscv_vlse8_v_i8m1_tumu(vbool8_t mask,vint8m1_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m1_tumu(mask,merge,base,bstride,32);
+}
+
+vint8m2_t
+test___riscv_vlse8_v_i8m2_tumu(vbool4_t mask,vint8m2_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m2_tumu(mask,merge,base,bstride,32);
+}
+
+vint8m4_t
+test___riscv_vlse8_v_i8m4_tumu(vbool2_t mask,vint8m4_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m4_tumu(mask,merge,base,bstride,32);
+}
+
+vint8m8_t
+test___riscv_vlse8_v_i8m8_tumu(vbool1_t mask,vint8m8_t merge,int8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_i8m8_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8mf8_t
+test___riscv_vlse8_v_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf8_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8mf4_t
+test___riscv_vlse8_v_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8mf2_t
+test___riscv_vlse8_v_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8m1_t
+test___riscv_vlse8_v_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m1_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8m2_t
+test___riscv_vlse8_v_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8m4_t
+test___riscv_vlse8_v_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint8m8_t
+test___riscv_vlse8_v_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,uint8_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse8_v_u8m8_tumu(mask,merge,base,bstride,32);
+}
+
+vint16mf4_t
+test___riscv_vlse16_v_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf4_tumu(mask,merge,base,bstride,32);
+}
+
+vint16mf2_t
+test___riscv_vlse16_v_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vint16m1_t
+test___riscv_vlse16_v_i16m1_tumu(vbool16_t mask,vint16m1_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m1_tumu(mask,merge,base,bstride,32);
+}
+
+vint16m2_t
+test___riscv_vlse16_v_i16m2_tumu(vbool8_t mask,vint16m2_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m2_tumu(mask,merge,base,bstride,32);
+}
+
+vint16m4_t
+test___riscv_vlse16_v_i16m4_tumu(vbool4_t mask,vint16m4_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m4_tumu(mask,merge,base,bstride,32);
+}
+
+vint16m8_t
+test___riscv_vlse16_v_i16m8_tumu(vbool2_t mask,vint16m8_t merge,int16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_i16m8_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16mf4_t
+test___riscv_vlse16_v_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16mf2_t
+test___riscv_vlse16_v_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16m1_t
+test___riscv_vlse16_v_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m1_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16m2_t
+test___riscv_vlse16_v_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16m4_t
+test___riscv_vlse16_v_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint16m8_t
+test___riscv_vlse16_v_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,uint16_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse16_v_u16m8_tumu(mask,merge,base,bstride,32);
+}
+
+vint32mf2_t
+test___riscv_vlse32_v_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vint32m1_t
+test___riscv_vlse32_v_i32m1_tumu(vbool32_t mask,vint32m1_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m1_tumu(mask,merge,base,bstride,32);
+}
+
+vint32m2_t
+test___riscv_vlse32_v_i32m2_tumu(vbool16_t mask,vint32m2_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m2_tumu(mask,merge,base,bstride,32);
+}
+
+vint32m4_t
+test___riscv_vlse32_v_i32m4_tumu(vbool8_t mask,vint32m4_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m4_tumu(mask,merge,base,bstride,32);
+}
+
+vint32m8_t
+test___riscv_vlse32_v_i32m8_tumu(vbool4_t mask,vint32m8_t merge,int32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_i32m8_tumu(mask,merge,base,bstride,32);
+}
+
+vuint32mf2_t
+test___riscv_vlse32_v_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint32m1_t
+test___riscv_vlse32_v_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m1_tumu(mask,merge,base,bstride,32);
+}
+
+vuint32m2_t
+test___riscv_vlse32_v_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint32m4_t
+test___riscv_vlse32_v_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint32m8_t
+test___riscv_vlse32_v_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,uint32_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_u32m8_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat32mf2_t
+test___riscv_vlse32_v_f32mf2_tumu(vbool64_t mask,vfloat32mf2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32mf2_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat32m1_t
+test___riscv_vlse32_v_f32m1_tumu(vbool32_t mask,vfloat32m1_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m1_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat32m2_t
+test___riscv_vlse32_v_f32m2_tumu(vbool16_t mask,vfloat32m2_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m2_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat32m4_t
+test___riscv_vlse32_v_f32m4_tumu(vbool8_t mask,vfloat32m4_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m4_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat32m8_t
+test___riscv_vlse32_v_f32m8_tumu(vbool4_t mask,vfloat32m8_t merge,float* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse32_v_f32m8_tumu(mask,merge,base,bstride,32);
+}
+
+vint64m1_t
+test___riscv_vlse64_v_i64m1_tumu(vbool64_t mask,vint64m1_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m1_tumu(mask,merge,base,bstride,32);
+}
+
+vint64m2_t
+test___riscv_vlse64_v_i64m2_tumu(vbool32_t mask,vint64m2_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m2_tumu(mask,merge,base,bstride,32);
+}
+
+vint64m4_t
+test___riscv_vlse64_v_i64m4_tumu(vbool16_t mask,vint64m4_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m4_tumu(mask,merge,base,bstride,32);
+}
+
+vint64m8_t
+test___riscv_vlse64_v_i64m8_tumu(vbool8_t mask,vint64m8_t merge,int64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_i64m8_tumu(mask,merge,base,bstride,32);
+}
+
+vuint64m1_t
+test___riscv_vlse64_v_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m1_tumu(mask,merge,base,bstride,32);
+}
+
+vuint64m2_t
+test___riscv_vlse64_v_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m2_tumu(mask,merge,base,bstride,32);
+}
+
+vuint64m4_t
+test___riscv_vlse64_v_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m4_tumu(mask,merge,base,bstride,32);
+}
+
+vuint64m8_t
+test___riscv_vlse64_v_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,uint64_t* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_u64m8_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat64m1_t
+test___riscv_vlse64_v_f64m1_tumu(vbool64_t mask,vfloat64m1_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m1_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat64m2_t
+test___riscv_vlse64_v_f64m2_tumu(vbool32_t mask,vfloat64m2_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m2_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat64m4_t
+test___riscv_vlse64_v_f64m4_tumu(vbool16_t mask,vfloat64m4_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m4_tumu(mask,merge,base,bstride,32);
+}
+
+vfloat64m8_t
+test___riscv_vlse64_v_f64m8_tumu(vbool8_t mask,vfloat64m8_t merge,double* base,ptrdiff_t bstride,size_t vl)
+{
+  return __riscv_vlse64_v_f64m8_tumu(mask,merge,base,bstride,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vlse8\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vlse16\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vlse32\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vlse64\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-1.c
new file mode 100644
index 00000000000..fff498f771f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m1(int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m2(int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m4(int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m8(int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m1(int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m2(int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m4(int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m8(int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m1(int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m2(int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m4(int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m8(int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32mf2(float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m1(float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m2(float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m4(float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m8(float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m1(int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m2(int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m4(int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m8(int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m1(double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m2(double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m4(double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4(base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m8(double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8(base,bstride,value,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-2.c
new file mode 100644
index 00000000000..921bc0d0493
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m1(int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m2(int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m4(int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m8(int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m1(int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m2(int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m4(int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m8(int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m1(int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m2(int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m4(int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m8(int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32mf2(float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m1(float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m2(float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m4(float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m8(float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m1(int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m2(int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m4(int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m8(int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m1(double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m2(double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m4(double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4(base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m8(double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8(base,bstride,value,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-3.c
new file mode 100644
index 00000000000..9c0d8a57499
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8(int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8mf4(int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8mf2(int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8m1(int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8m2(int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8m4(int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_i8m8(int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8mf8(uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8mf4(uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8mf2(uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8m1(uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8m2(uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8m4(uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse8_v_u8m8(uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16mf4(int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16mf2(int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16m1(int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16m2(int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16m4(int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_i16m8(int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16mf4(uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16mf2(uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16m1(uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16m2(uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16m4(uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse16_v_u16m8(uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_i32mf2(int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_i32m1(int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_i32m2(int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_i32m4(int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_i32m8(int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_u32mf2(uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_u32m1(uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_u32m2(uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_u32m4(uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_u32m8(uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_f32mf2(float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_f32m1(float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_f32m2(float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_f32m4(float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse32_v_f32m8(float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_i64m1(int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_i64m2(int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_i64m4(int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_i64m8(int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_u64m1(uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_u64m2(uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_u64m4(uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_u64m8(uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_f64m1(double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_f64m2(double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_f64m4(double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4(base,bstride,value,32);
+}
+
+void
+test___riscv_vsse64_v_f64m8(double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8(base,bstride,value,32);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+\s+} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-1.c
new file mode 100644
index 00000000000..daf625e01dc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-1.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8_m(mask,base,bstride,value,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-2.c
new file mode 100644
index 00000000000..4ff35ffc7e7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-2.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4_m(mask,base,bstride,value,31);
+}
+
+void
+test___riscv_vsse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8_m(mask,base,bstride,value,31);
+}
+
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-3.c
new file mode 100644
index 00000000000..daf625e01dc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsse_m-3.c
@@ -0,0 +1,345 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */
+
+#include "riscv_vector.h"
+
+void
+test___riscv_vsse8_v_i8mf8_m(vbool64_t mask,int8_t* base,ptrdiff_t bstride,vint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf4_m(vbool32_t mask,int8_t* base,ptrdiff_t bstride,vint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8mf2_m(vbool16_t mask,int8_t* base,ptrdiff_t bstride,vint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m1_m(vbool8_t mask,int8_t* base,ptrdiff_t bstride,vint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m2_m(vbool4_t mask,int8_t* base,ptrdiff_t bstride,vint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m4_m(vbool2_t mask,int8_t* base,ptrdiff_t bstride,vint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_i8m8_m(vbool1_t mask,int8_t* base,ptrdiff_t bstride,vint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_i8m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf8_m(vbool64_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf4_m(vbool32_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8mf2_m(vbool16_t mask,uint8_t* base,ptrdiff_t bstride,vuint8mf2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m1_m(vbool8_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m1_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m2_m(vbool4_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m2_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m4_m(vbool2_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m4_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse8_v_u8m8_m(vbool1_t mask,uint8_t* base,ptrdiff_t bstride,vuint8m8_t value,size_t vl)
+{
+  __riscv_vsse8_v_u8m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf4_m(vbool64_t mask,int16_t* base,ptrdiff_t bstride,vint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16mf2_m(vbool32_t mask,int16_t* base,ptrdiff_t bstride,vint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m1_m(vbool16_t mask,int16_t* base,ptrdiff_t bstride,vint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m2_m(vbool8_t mask,int16_t* base,ptrdiff_t bstride,vint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m4_m(vbool4_t mask,int16_t* base,ptrdiff_t bstride,vint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_i16m8_m(vbool2_t mask,int16_t* base,ptrdiff_t bstride,vint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_i16m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf4_m(vbool64_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16mf2_m(vbool32_t mask,uint16_t* base,ptrdiff_t bstride,vuint16mf2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m1_m(vbool16_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m1_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m2_m(vbool8_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m2_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m4_m(vbool4_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m4_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse16_v_u16m8_m(vbool2_t mask,uint16_t* base,ptrdiff_t bstride,vuint16m8_t value,size_t vl)
+{
+  __riscv_vsse16_v_u16m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32mf2_m(vbool64_t mask,int32_t* base,ptrdiff_t bstride,vint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m1_m(vbool32_t mask,int32_t* base,ptrdiff_t bstride,vint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m2_m(vbool16_t mask,int32_t* base,ptrdiff_t bstride,vint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m4_m(vbool8_t mask,int32_t* base,ptrdiff_t bstride,vint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_i32m8_m(vbool4_t mask,int32_t* base,ptrdiff_t bstride,vint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_i32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32mf2_m(vbool64_t mask,uint32_t* base,ptrdiff_t bstride,vuint32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m1_m(vbool32_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m2_m(vbool16_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m4_m(vbool8_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_u32m8_m(vbool4_t mask,uint32_t* base,ptrdiff_t bstride,vuint32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_u32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32mf2_m(vbool64_t mask,float* base,ptrdiff_t bstride,vfloat32mf2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32mf2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m1_m(vbool32_t mask,float* base,ptrdiff_t bstride,vfloat32m1_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m2_m(vbool16_t mask,float* base,ptrdiff_t bstride,vfloat32m2_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m4_m(vbool8_t mask,float* base,ptrdiff_t bstride,vfloat32m4_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse32_v_f32m8_m(vbool4_t mask,float* base,ptrdiff_t bstride,vfloat32m8_t value,size_t vl)
+{
+  __riscv_vsse32_v_f32m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m1_m(vbool64_t mask,int64_t* base,ptrdiff_t bstride,vint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m2_m(vbool32_t mask,int64_t* base,ptrdiff_t bstride,vint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m4_m(vbool16_t mask,int64_t* base,ptrdiff_t bstride,vint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_i64m8_m(vbool8_t mask,int64_t* base,ptrdiff_t bstride,vint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_i64m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m1_m(vbool64_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m2_m(vbool32_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m4_m(vbool16_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_u64m8_m(vbool8_t mask,uint64_t* base,ptrdiff_t bstride,vuint64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_u64m8_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m1_m(vbool64_t mask,double* base,ptrdiff_t bstride,vfloat64m1_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m1_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m2_m(vbool32_t mask,double* base,ptrdiff_t bstride,vfloat64m2_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m2_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m4_m(vbool16_t mask,double* base,ptrdiff_t bstride,vfloat64m4_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m4_m(mask,base,bstride,value,vl);
+}
+
+void
+test___riscv_vsse64_v_f64m8_m(vbool8_t mask,double* base,ptrdiff_t bstride,vfloat64m8_t value,size_t vl)
+{
+  __riscv_vsse64_v_f64m8_m(mask,base,bstride,value,vl);
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vsse8\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vsse16\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 2 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vsse32\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
+/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vsse64\.v\s+v[0-9]+,0\s*\([a-x0-9]+\),\s*[a-x0-9]+,\s*v0.t} 3 } } */
-- 
2.36.3



* [PATCH] RISC-V: Add vlse/vsse C/C++ API intrinsics support
@ 2023-01-20  4:25 juzhe.zhong
  0 siblings, 0 replies; 2+ messages in thread
From: juzhe.zhong @ 2023-01-20  4:25 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Ju-Zhe Zhong

From: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/predicates.md (pmode_reg_or_0_operand): New predicate.
        * config/riscv/riscv-vector-builtins-bases.cc (class loadstore): Add vlse/vsse intrinsic support.
        (BASE): Ditto.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.
        * config/riscv/riscv-vector-builtins-functions.def (vlse): Ditto.
        (vsse): Ditto.
        * config/riscv/riscv-vector-builtins.cc (function_expander::use_contiguous_load_insn): Ditto.
        * config/riscv/vector.md (@pred_strided_load<mode>): Ditto.
        (@pred_strided_store<mode>): Ditto.
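
The scalar reference semantics of the two new intrinsics can be sketched
as follows (this is an illustrative model only, not the patch's vector
implementation; note the real API's byte stride is simplified here to an
element-count stride, and the helper names `ref_vsse8_m`/`ref_vlse8_m`
are ours):

```c
#include <stddef.h>
#include <stdint.h>

/* Masked strided store (models __riscv_vsse8_v_i8*_m): for each active
   lane i < vl, store value[i] to base[i * stride].  Inactive lanes
   leave memory untouched.  */
static void
ref_vsse8_m (const uint8_t *mask, int8_t *base, ptrdiff_t stride,
	     const int8_t *value, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    if (mask[i])
      base[i * stride] = value[i];
}

/* Masked strided load (models __riscv_vlse8_v_i8*_m): for each active
   lane i < vl, gather base[i * stride]; inactive lanes keep the prior
   destination value (mask-undisturbed policy).  */
static void
ref_vlse8_m (const uint8_t *mask, int8_t *dest, const int8_t *base,
	     ptrdiff_t stride, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    if (mask[i])
      dest[i] = base[i * stride];
}
```

With stride 1 these degenerate to the contiguous vse/vle forms, which is
why the patch folds both into the existing loadstore class via a
STRIDED_P template parameter rather than adding a new class.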

---
 gcc/config/riscv/predicates.md                |  4 +
 .../riscv/riscv-vector-builtins-bases.cc      | 26 +++++-
 .../riscv/riscv-vector-builtins-bases.h       |  2 +
 .../riscv/riscv-vector-builtins-functions.def |  2 +
 gcc/config/riscv/riscv-vector-builtins.cc     | 33 ++++++-
 gcc/config/riscv/vector.md                    | 90 +++++++++++++++++--
 6 files changed, 143 insertions(+), 14 deletions(-)

diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 5a5a49bf7c0..bae9cfa02dd 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -286,6 +286,10 @@
 	    (match_test "GET_CODE (op) == UNSPEC
 			 && (XINT (op, 1) == UNSPEC_VUNDEF)"))))
 
+(define_special_predicate "pmode_reg_or_0_operand"
+  (ior (match_operand 0 "const_0_operand")
+       (match_operand 0 "pmode_register_operand")))
+
 ;; The scalar operand can be directly broadcast by RVV instructions.
 (define_predicate "direct_broadcast_operand"
   (ior (match_operand 0 "register_operand")
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 0da4797d272..17a1294cf85 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -84,8 +84,8 @@ public:
   }
 };
 
-/* Implements vle.v/vse.v/vlm.v/vsm.v codegen.  */
-template <bool STORE_P>
+/* Implements vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v codegen.  */
+template <bool STORE_P, bool STRIDED_P = false>
 class loadstore : public function_base
 {
   unsigned int call_properties (const function_instance &) const override
@@ -106,9 +106,23 @@ class loadstore : public function_base
   rtx expand (function_expander &e) const override
   {
     if (STORE_P)
-      return e.use_contiguous_store_insn (code_for_pred_store (e.vector_mode ()));
+      {
+	if (STRIDED_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store (e.vector_mode ()));
+      }
     else
-      return e.use_contiguous_load_insn (code_for_pred_mov (e.vector_mode ()));
+      {
+	if (STRIDED_P)
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov (e.vector_mode ()));
+      }
   }
 };
 
@@ -118,6 +132,8 @@ static CONSTEXPR const loadstore<false> vle_obj;
 static CONSTEXPR const loadstore<true> vse_obj;
 static CONSTEXPR const loadstore<false> vlm_obj;
 static CONSTEXPR const loadstore<true> vsm_obj;
+static CONSTEXPR const loadstore<false, true> vlse_obj;
+static CONSTEXPR const loadstore<true, true> vsse_obj;
 
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
@@ -130,5 +146,7 @@ BASE (vle)
 BASE (vse)
 BASE (vlm)
 BASE (vsm)
+BASE (vlse)
+BASE (vsse)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 28151a8d8d2..d8676e94b28 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -30,6 +30,8 @@ extern const function_base *const vle;
 extern const function_base *const vse;
 extern const function_base *const vlm;
 extern const function_base *const vsm;
+extern const function_base *const vlse;
+extern const function_base *const vsse;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 63aa8fe32c8..348262928c8 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -44,5 +44,7 @@ DEF_RVV_FUNCTION (vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
 DEF_RVV_FUNCTION (vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
 DEF_RVV_FUNCTION (vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
 DEF_RVV_FUNCTION (vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_RVV_FUNCTION (vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
 
 #undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index f95fe0d58d5..b97a2c94550 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -167,6 +167,19 @@ static CONSTEXPR const rvv_arg_type_info scalar_ptr_args[]
   = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
      rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (const scalar_type *, ptrdiff_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_ptrdiff_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_ptrdiff), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, ptrdiff_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_ptrdiff_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_ptrdiff), rvv_arg_type_info (RVV_BASE_vector),
+     rvv_arg_type_info_end};
+
 /* A list of none preds that will be registered for intrinsic functions.  */
 static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
@@ -227,6 +240,22 @@ static CONSTEXPR const rvv_op_info b_v_scalar_ptr_ops
      rvv_arg_type_info (RVV_BASE_void), /* Return type */
      scalar_ptr_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *,
+ * ptrdiff_t) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ptrdiff_ops
+  = {all_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     scalar_const_ptr_ptrdiff_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, ptrdiff_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ptrdiff_ops
+  = {all_ops,				/* Types */
+     OP_TYPE_v,				/* Suffix */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type */
+     scalar_ptr_ptrdiff_args /* Args */};
+
 /* A list of all RVV intrinsic functions.  */
 static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
@@ -920,7 +949,9 @@ function_expander::use_contiguous_load_insn (insn_code icode)
       add_input_operand (Pmode, get_tail_policy_for_pred (pred));
       add_input_operand (Pmode, get_mask_policy_for_pred (pred));
     }
-  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  if (opno != insn_data[icode].n_generator_args)
+    add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
 
   return generate_insn (icode);
 }
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index e1173f2d5a6..498cf21905b 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -33,6 +33,7 @@
   UNSPEC_VUNDEF
   UNSPEC_VPREDICATE
   UNSPEC_VLMAX
+  UNSPEC_STRIDED
 ])
 
 (define_constants [
@@ -204,28 +205,56 @@
 
 ;; The index of operand[] to get the avl op.
 (define_attr "vl_op_idx" ""
-	(cond [(eq_attr "type" "vlde,vste,vimov,vfmov,vldm,vstm,vlds,vmalu")
-	 (const_int 4)]
-	(const_int INVALID_ATTRIBUTE)))
+  (cond [(eq_attr "type" "vlde,vste,vimov,vfmov,vldm,vstm,vmalu,vsts")
+	   (const_int 4)
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast,
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (const_int 5)
+	     (const_int 4))]
+  (const_int INVALID_ATTRIBUTE)))
 
 ;; The tail policy op value.
 (define_attr "ta" ""
-  (cond [(eq_attr "type" "vlde,vimov,vfmov,vlds")
-	   (symbol_ref "riscv_vector::get_ta(operands[5])")]
+  (cond [(eq_attr "type" "vlde,vimov,vfmov")
+	   (symbol_ref "riscv_vector::get_ta(operands[5])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast,
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (symbol_ref "riscv_vector::get_ta(operands[6])")
+	     (symbol_ref "riscv_vector::get_ta(operands[5])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; The mask policy op value.
 (define_attr "ma" ""
-  (cond [(eq_attr "type" "vlde,vlds")
-	   (symbol_ref "riscv_vector::get_ma(operands[6])")]
+  (cond [(eq_attr "type" "vlde")
+	   (symbol_ref "riscv_vector::get_ma(operands[6])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast,
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (symbol_ref "riscv_vector::get_ma(operands[7])")
+	     (symbol_ref "riscv_vector::get_ma(operands[6])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; The avl type value.
 (define_attr "avl_type" ""
-  (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov,vlds,vlds")
+  (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov")
 	   (symbol_ref "INTVAL (operands[7])")
 	 (eq_attr "type" "vldm,vstm,vimov,vmalu,vmalu")
-	   (symbol_ref "INTVAL (operands[5])")]
+	   (symbol_ref "INTVAL (operands[5])")
+
+	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast,
+	 ;; whereas it is pred_strided_load if operands[3] is vector mode.
+	 (eq_attr "type" "vlds")
+	   (if_then_else (match_test "VECTOR_MODE_P (GET_MODE (operands[3]))")
+	     (const_int INVALID_ATTRIBUTE)
+	     (symbol_ref "INTVAL (operands[7])"))]
 	(const_int INVALID_ATTRIBUTE)))
 
 ;; -----------------------------------------------------------------
@@ -760,3 +789,46 @@
    vlse<sew>.v\t%0,%3,zero"
   [(set_attr "type" "vimov,vfmov,vlds,vlds")
    (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated Strided loads/stores
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 7.5. Vector Strided Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_strided_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand:V 3 "memory_operand"         "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_STRIDED)
+	  (match_operand:V 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_VECTOR"
+  "vlse<sew>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:V 3 "register_operand"       "   vr")] UNSPEC_STRIDED)
+	  (match_dup 0)))]
+  "TARGET_VECTOR"
+  "vsse<sew>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")])
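
(Again illustrative only, not part of the patch.) The _tu/_tum/_tumu test variants added here exercise the usual RVV predication semantics: with tail-undisturbed, mask-undisturbed policies, both tail elements (i >= vl) and masked-off elements keep the merge operand's value. A scalar sketch of the "tumu" flavour of the strided load, with names of my own choosing:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar sketch of a masked, tail-undisturbed, mask-undisturbed vlse8
   ("tumu"): masked-off elements and tail elements beyond vl keep the
   MERGE value; active elements load from BASE with a byte stride.
   VLMAX is the register length in elements.  */
static void
ref_vlse8_tumu (uint8_t *dest, const uint8_t *mask, const uint8_t *merge,
		const uint8_t *base, ptrdiff_t stride, size_t vl,
		size_t vlmax)
{
  for (size_t i = 0; i < vlmax; i++)
    {
      if (i >= vl || !mask[i])
	dest[i] = merge[i];	/* tail / masked-off: undisturbed.  */
      else
	dest[i] = *(base + (ptrdiff_t) i * stride);
    }
}
```
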
-- 
2.36.3

