public inbox for gcc-patches@gcc.gnu.org
* [PATCH v3] RISC-V: Add load and store intrinsics support for RVV
@ 2022-06-01  1:18 juzhe.zhong
From: juzhe.zhong @ 2022-06-01  1:18 UTC
  To: gcc-patches; +Cc: zhongjuzhe

From: zhongjuzhe <juzhe.zhong@rivai.ai>

This patch is a supplemental patch for [PATCH 14/21], which was missing from v1.
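
For reference, a minimal usage sketch (not part of the patch) of the strided and
fault-only-first loads this series exposes.  The prototypes follow the RVV
intrinsic naming convention that the builders below assemble; treat the exact
names and signatures as assumptions for illustration, not as declarations added
here, and note the sketch only handles the first vl elements rather than a full
strip-mined loop.

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Strided load/store: gather one column of a row-major matrix.  */
    void
    gather_column (int32_t *dst, const int32_t *matrix, ptrdiff_t row_bytes,
                   size_t rows)
    {
      size_t vl = vsetvl_e32m1 (rows);                       /* elements this pass */
      vint32m1_t col = vlse32_v_i32m1 (matrix, row_bytes, vl); /* vlse32.v */
      vse32_v_i32m1 (dst, col, vl);                          /* unit-stride store */
    }

    /* Fault-only-first load: new_vl reports how many elements were actually
       loaded.  The gimple fold in this patch splits such a call into a plain
       load plus a readvl call that recovers that count.  */
    size_t
    copy_first_chunk (int32_t *dst, const int32_t *src, size_t n)
    {
      size_t vl = vsetvl_e32m1 (n);
      size_t new_vl;
      vint32m1_t v = vle32ff_v_i32m1 (src, &new_vl, vl);
      vse32_v_i32m1 (dst, v, new_vl);
      return new_vl;
    }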

gcc/ChangeLog:

        * config/riscv/constraints.md (vi): New constraint.
        (vj): New constraint.
        (vk): New constraint.
        (vc): New constraint.
        (Wn5): New constraint.
        * config/riscv/predicates.md (vector_any_register_operand): New predicate.
        (p_reg_or_0_operand): New predicate.
        (vector_reg_or_const0_operand): New predicate.
        (vector_move_operand): New predicate.
        (p_reg_or_uimm5_operand): New predicate.
        (reg_or_const_int_operand): New predicate.
        (reg_or_uimm5_operand): New predicate.
        (reg_or_neg_simm5_operand): New predicate.
        (vector_const_simm5_operand): New predicate.
        (vector_neg_const_simm5_operand): New predicate.
        (vector_const_uimm5_operand): New predicate.
        (vector_arith_operand): New predicate.
        (vector_neg_arith_operand): New predicate.
        (vector_shift_operand): New predicate.
        (vector_constant_vector_operand): New predicate.
        (vector_perm_operand): New predicate.
        * config/riscv/riscv-vector-builtins-functions.cc (intrinsic_rename): New function.
        (vector_scalar_operation_p): New function.
        (readvl::call_properties): New function.
        (readvl::assemble_name): New function.
        (readvl::get_return_type): New function.
        (readvl::get_argument_types): New function.
        (readvl::expand): New function.
        (vlse::call_properties): New function.
        (vlse::get_return_type): New function.
        (vlse::get_argument_types): New function.
        (vlse::can_be_overloaded_p): New function.
        (vlse::expand): New function.
        (vsse::call_properties): New function.
        (vsse::get_argument_types): New function.
        (vsse::can_be_overloaded_p): New function.
        (vsse::expand): New function.
        (vlm::assemble_name): New function.
        (vlm::call_properties): New function.
        (vlm::get_return_type): New function.
        (vlm::get_argument_types): New function.
        (vlm::expand): New function.
        (vsm::assemble_name): New function.
        (vsm::call_properties): New function.
        (vsm::get_argument_types): New function.
        (vsm::expand): New function.
        (indexedloadstore::assemble_name): New function.
        (indexedloadstore::get_argument_types): New function.
        (vlxei::call_properties): New function.
        (vlxei::get_return_type): New function.
        (vluxei::expand): New function.
        (vloxei::expand): New function.
        (vsuxei::call_properties): New function.
        (vsuxei::expand): New function.
        (vsoxei::call_properties): New function.
        (vsoxei::expand): New function.
        (vleff::call_properties): New function.
        (vleff::assemble_name): New function.
        (vleff::get_return_type): New function.
        (vleff::get_argument_types): New function.
        (vleff::can_be_overloaded_p): New function.
        (vleff::fold): New function.
        (vleff::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def (readvl): New macro define.
        (vlm): New macro define.
        (vsm): New macro define.
        (vlse): New macro define.
        (vsse): New macro define.
        (vluxei): New macro define.
        (vloxei): New macro define.
        (vsuxei): New macro define.
        (vsoxei): New macro define.
        (vleff): New macro define.
        * config/riscv/riscv-vector-builtins-functions.h (class readvl): New class.
        (class vlse): New class.
        (class vsse): New class.
        (class vlm): New class.
        (class vsm): New class.
        (class indexedloadstore): New class.
        (class vlxei): New class.
        (class vluxei): New class.
        (class vloxei): New class.
        (class vsuxei): New class.
        (class vsoxei): New class.
        (class vleff): New class.
        * config/riscv/riscv-vector-builtins-iterators.def (VNOT64BITI): New iterator.
        (V16): New iterator.
        (VI16): New iterator.
        (V2UNITS): New iterator.
        (V4UNITS): New iterator.
        (V8UNITS): New iterator.
        (V16UNITS): New iterator.
        (V32UNITS): New iterator.
        (V64UNITS): New iterator.
        (V2UNITSI): New iterator.
        (V4UNITSI): New iterator.
        (V8UNITSI): New iterator.
        (V16UNITSI): New iterator.
        (V32UNITSI): New iterator.
        (V64UNITSI): New iterator.
        (V128UNITSI): New iterator.
        (VWI): New iterator.
        (VWINOQI): New iterator.
        (VWF): New iterator.
        (VQWI): New iterator.
        (VOWI): New iterator.
        (VWREDI): New iterator.
        (VWREDF): New iterator.
        (VW): New iterator.
        (VQW): New iterator.
        (VOW): New iterator.
        (VMAP): New iterator.
        (VMAPI16): New iterator.
        (VWMAP): New iterator.
        (VWFMAP): New iterator.
        (VLMUL1): New iterator.
        (VWLMUL1): New iterator.
        * config/riscv/riscv-vector.cc (rvv_classify_vlmul_field): Adjust for MODE_VECTOR_BOOL.
        * config/riscv/riscv.md: Add RVV instructions.
        * config/riscv/vector-iterators.md (aadd): New iterators and attributes.
        (osum): New iterators and attributes.
        (ssra): New iterators and attributes.
        (clipu): New iterators and attributes.
        (rec7): New iterators and attributes.
        (x): New iterators and attributes.
        (u): New iterators and attributes.
        (sof): New iterators and attributes.
        (down): New iterators and attributes.
        (vsarith): New iterators and attributes.
        (nmsub): New iterators and attributes.
        (nmsac): New iterators and attributes.
        (plus): New iterators and attributes.
        (z): New iterators and attributes.
        (varith): New iterators and attributes.
        (mul): New iterators and attributes.
        (xnor): New iterators and attributes.
        * config/riscv/vector.md (@readvl_<X:mode>): New pattern.
        (@vlse<mode>): New pattern.
        (@vsse<mode>): New pattern.
        (@vl<uo>xei<V2UNITS:mode><V2UNITSI:mode>): New pattern.
        (@vl<uo>xei<V4UNITS:mode><V4UNITSI:mode>): New pattern.
        (@vl<uo>xei<V8UNITS:mode><V8UNITSI:mode>): New pattern.
        (@vl<uo>xei<V16UNITS:mode><V16UNITSI:mode>): New pattern.
        (@vl<uo>xei<V32UNITS:mode><V32UNITSI:mode>): New pattern.
        (@vl<uo>xei<V64UNITS:mode><V64UNITSI:mode>): New pattern.
        (@vl<uo>xei<V128UNITSI:mode><V128UNITSI:mode>): New pattern.
        (@vs<uo>xei<V2UNITS:mode><V2UNITSI:mode>): New pattern.
        (@vs<uo>xei<V4UNITS:mode><V4UNITSI:mode>): New pattern.
        (@vs<uo>xei<V8UNITS:mode><V8UNITSI:mode>): New pattern.
        (@vs<uo>xei<V16UNITS:mode><V16UNITSI:mode>): New pattern.
        (@vs<uo>xei<V32UNITS:mode><V32UNITSI:mode>): New pattern.
        (@vs<uo>xei<V64UNITS:mode><V64UNITSI:mode>): New pattern.
        (@vs<uo>xei<V128UNITSI:mode><V128UNITSI:mode>): New pattern.
        (@vle<mode>ff): New pattern.

---
 gcc/config/riscv/constraints.md               |  25 +
 gcc/config/riscv/predicates.md                |  83 +-
 .../riscv/riscv-vector-builtins-functions.cc  | 409 ++++++++++
 .../riscv/riscv-vector-builtins-functions.def |  70 ++
 .../riscv/riscv-vector-builtins-functions.h   | 178 +++++
 .../riscv/riscv-vector-builtins-iterators.def | 377 +++++++++
 gcc/config/riscv/riscv-vector.cc              |  33 +-
 gcc/config/riscv/riscv.md                     |   3 +-
 gcc/config/riscv/vector-iterators.md          | 753 +++++++++++++++++-
 gcc/config/riscv/vector.md                    | 408 +++++++++-
 10 files changed, 2311 insertions(+), 28 deletions(-)

diff --git a/gcc/config/riscv/constraints.md b/gcc/config/riscv/constraints.md
index 114878130bb..e3cdc56d484 100644
--- a/gcc/config/riscv/constraints.md
+++ b/gcc/config/riscv/constraints.md
@@ -92,6 +92,26 @@
 (define_register_constraint "vm" "TARGET_VECTOR ? VM_REGS : NO_REGS"
   "A vector mask register (if available).")
 
+(define_constraint "vi"
+  "A vector 5-bit signed immediate."
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, -16, 15)")))
+
+(define_constraint "vj"
+  "A vector negated 5-bit signed immediate."
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, -15, 16)")))
+
+(define_constraint "vk"
+  "A vector 5-bit unsigned immediate."
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, 0, 31)")))
+
+(define_constraint "vc"
+  "Any vector duplicate constant."
+  (and (match_code "const_vector")
+       (match_test "const_vec_duplicate_p (op)")))
+       
 (define_constraint "vp"
   "POLY_INT"
   (and (match_code "const_poly_int")
@@ -102,3 +122,8 @@
   "Signed immediate 5-bit value"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), -16, 15)")))
+
+(define_constraint "Wn5"
+  "Signed immediate 5-bit value"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), -15, 16)")))
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index e31c829bf5b..9004fad01ee 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -249,6 +249,13 @@
 
 ;; Vector Predicates.
 
+;; A special predicate that doesn't match a particular mode.
+(define_special_predicate "vector_any_register_operand"
+  (match_code "reg, subreg")
+{
+  return VECTOR_MODE_P (GET_MODE (op));
+})
+
 (define_special_predicate "p_reg_or_const_csr_operand"
   (match_code "reg, subreg, const_int")
 {
@@ -257,30 +264,78 @@
   return GET_MODE (op) == Pmode;
 })
 
-;; A special predicate that doesn't match a particular mode.
-(define_special_predicate "vector_any_register_operand"
-  (match_code "reg, subreg")
+(define_special_predicate "p_reg_or_0_operand"
+  (match_code "reg, subreg, const_int")
 {
-  return VECTOR_MODE_P (GET_MODE (op));
+  if (CONST_INT_P (op))
+    return op == const0_rtx;
+  return GET_MODE (op) == Pmode;
 })
 
-(define_predicate "vector_reg_or_const0_operand"
-  (ior (match_operand 0 "register_operand")
-       (match_test "op == const0_rtx && !VECTOR_MODE_P (GET_MODE (op))")))
-
-(define_predicate "vector_move_operand"
-  (ior (match_operand 0 "nonimmediate_operand")
-      (match_code "const_vector")))
-
+(define_special_predicate "p_reg_or_uimm5_operand"
+  (match_code "reg, subreg, const_int")
+{
+  if (CONST_INT_P (op))
+    return satisfies_constraint_K (op);
+  return GET_MODE (op) == Pmode;
+})
+ 
 (define_predicate "reg_or_mem_operand"
   (ior (match_operand 0 "register_operand")
        (match_operand 0 "memory_operand")))
 
+(define_predicate "reg_or_const_int_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_code "const_wide_int, const_int")))
+
 (define_predicate "reg_or_simm5_operand"
   (ior (match_operand 0 "register_operand")
        (and (match_operand 0 "const_int_operand")
 	    (match_test "!FLOAT_MODE_P (GET_MODE (op)) && IN_RANGE (INTVAL (op), -16, 15)"))))
 
-(define_predicate "reg_or_const_int_operand"
+(define_predicate "reg_or_uimm5_operand"
+  (match_operand 0 "csr_operand"))
+
+(define_predicate "reg_or_neg_simm5_operand"
+  (ior (match_operand 0 "register_operand")
+       (and (match_operand 0 "const_int_operand")
+	    (match_test "IN_RANGE (INTVAL (op), -15, 16)"))))
+
+(define_predicate "vector_const_simm5_operand"
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, -16, 15)")))
+
+(define_predicate "vector_neg_const_simm5_operand"
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, -15, 16)")))
+
+(define_predicate "vector_const_uimm5_operand"
+  (and (match_code "const_vector")
+       (match_test "rvv_const_vec_all_same_in_range_p (op, 0, 31)")))
+ 
+(define_predicate "vector_move_operand"
+  (ior (match_operand 0 "nonimmediate_operand")
+      (match_code "const_vector")))
+
+(define_predicate "vector_arith_operand"
+  (ior (match_operand 0 "vector_const_simm5_operand")
+       (match_operand 0 "register_operand")))
+
+(define_predicate "vector_neg_arith_operand"
+  (ior (match_operand 0 "vector_neg_const_simm5_operand")
+       (match_operand 0 "register_operand")))
+
+(define_predicate "vector_shift_operand"
+  (ior (match_operand 0 "vector_const_uimm5_operand")
+       (match_operand 0 "register_operand")))
+
+(define_predicate "vector_constant_vector_operand"
+  (match_code "const,const_vector"))
+
+(define_predicate "vector_perm_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "vector_constant_vector_operand")))
+
+(define_predicate "vector_reg_or_const0_operand"
   (ior (match_operand 0 "register_operand")
-       (match_code "const_wide_int, const_int")))
\ No newline at end of file
+       (match_test "op == const0_rtx && !VECTOR_MODE_P (GET_MODE (op))")))
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index 9d2895c3d3e..0726465f146 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -446,6 +446,18 @@ is_dt_const (enum data_type_index dt)
 }
 
 /* Helper functions for builder implementation. */
+static void
+intrinsic_rename (function_instance &instance, int index)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[index];
+  bool unsigned_p = instance.get_data_type_list ()[index] == DT_unsigned;
+  const char *name = instance.get_base_name ();
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  const char *pred = get_pred_str (instance.get_pred ());
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s%s", name, op, dt, pred);
+}
+
 static void
 intrinsic_rename (function_instance &instance, int index1, int index2)
 {
@@ -471,6 +483,12 @@ get_dt_t_with_index (const function_instance &instance, int index)
   return get_dt_t (mode, unsigned_p, ptr_p, c_p);
 }
 
+static bool
+vector_scalar_operation_p (enum operation_index op)
+{
+  return op == OP_vx || op == OP_wx || op == OP_vxm || op == OP_vf ||
+         op == OP_wf || op == OP_vfm;
+}
 
 /* Return true if the function has no return value.  */
 static bool
@@ -1394,6 +1412,58 @@ vsetvlmax::expand (const function_instance &instance, tree exp, rtx target) cons
                                 !function_returns_void_p (fndecl));
 }
 
+/* A function implementation for readvl functions.  */
+unsigned int
+readvl::call_properties () const
+{
+  return CP_READ_CSR;
+}
+
+char *
+readvl::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  const char *name = instance.get_base_name ();
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s", name, dt);
+  return nullptr;
+}
+
+tree
+readvl::get_return_type (const function_instance &) const
+{
+  return size_type_node;
+}
+
+void
+readvl::get_argument_types (const function_instance &instance,
+                            vec<tree> &argument_types) const
+{
+  argument_types.quick_push (get_dt_t_with_index (instance, 0));
+}
+
+rtx
+readvl::expand (const function_instance &, tree exp, rtx target) const
+{
+  struct expand_operand ops[MAX_RECOG_OPERANDS];
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
+  enum insn_code icode = code_for_readvl (Pmode);
+
+  /* Map any target to operand 0.  */
+  int opno = 0;
+  create_output_operand (&ops[opno++], target, mode);
+
+  for (int argno = 0; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (&ops[opno++], exp, argno);
+
+  /* Map the arguments to the other operands.  */
+  gcc_assert (opno == insn_data[icode].n_generator_args);
+  return generate_builtin_insn (icode, opno, ops,
+                                !function_returns_void_p (fndecl));
+}
+
 /* A function implementation for Miscellaneous functions.  */
 char *
 misc::assemble_name (function_instance &instance)
@@ -1663,6 +1733,345 @@ vse::expand (const function_instance &instance, tree exp, rtx target) const
   return expand_builtin_insn (icode, exp, target, instance);
 }
 
+/* A function implementation for vlse functions.  */
+unsigned int
+vlse::call_properties () const
+{
+  return CP_READ_MEMORY;
+}
+
+tree
+vlse::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+void
+vlse::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  loadstore::get_argument_types (instance, argument_types);
+  argument_types.quick_push (ptrdiff_type_node);
+}
+
+bool
+vlse::can_be_overloaded_p (const function_instance &instance) const
+{
+  return instance.get_pred () == PRED_m || instance.get_pred () == PRED_tu ||
+         instance.get_pred () == PRED_tamu ||
+         instance.get_pred () == PRED_tuma || instance.get_pred () == PRED_tumu;
+}
+
+rtx
+vlse::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vlse (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsse functions.  */
+unsigned int
+vsse::call_properties () const
+{
+  return CP_WRITE_MEMORY;
+}
+
+void
+vsse::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  loadstore::get_argument_types (instance, argument_types);
+  argument_types.quick_insert (1, ptrdiff_type_node);
+}
+
+bool
+vsse::can_be_overloaded_p (const function_instance &) const
+{
+  return true;
+}
+
+rtx
+vsse::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  enum insn_code icode = code_for_vsse (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vlm functions.  */
+char *
+vlm::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  const char *name = instance.get_base_name ();
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s", name, op, dt);
+  return nullptr;
+}
+
+unsigned int
+vlm::call_properties () const
+{
+  return CP_READ_MEMORY;
+}
+
+tree
+vlm::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+void
+vlm::get_argument_types (const function_instance &,
+                         vec<tree> &argument_types) const
+{
+  argument_types.quick_push (const_scalar_pointer_types[VECTOR_TYPE_uint8]);
+}
+
+rtx
+vlm::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vlm (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsm functions.  */
+char *
+vsm::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  const char *name = instance.get_base_name ();
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s", name, op, dt);
+  append_name (name);
+  return finish_name ();
+}
+
+unsigned int
+vsm::call_properties () const
+{
+  return CP_WRITE_MEMORY;
+}
+
+void
+vsm::get_argument_types (const function_instance &instance,
+                         vec<tree> &argument_types) const
+{
+  argument_types.quick_push (scalar_pointer_types[VECTOR_TYPE_uint8]);
+  argument_types.quick_push (get_dt_t_with_index (instance, 0));
+}
+
+rtx
+vsm::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  enum insn_code icode = code_for_vsm (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for indexed loadstore functions.  */
+char *
+indexedloadstore::assemble_name (function_instance &instance)
+{
+  machine_mode data_mode = instance.get_arg_pattern ().arg_list[0];
+  machine_mode index_mode = instance.get_arg_pattern ().arg_list[2];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  int sew = GET_MODE_BITSIZE (GET_MODE_INNER (index_mode));
+  char name[NAME_MAXLEN];
+  snprintf (name, NAME_MAXLEN, "%s%d", instance.get_base_name (), sew);
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *dt = mode2data_type_str (data_mode, unsigned_p, false);
+  const char *pred = get_pred_str (instance.get_pred ());
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s%s", name, op, dt, pred);
+  append_name (name);
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+void
+indexedloadstore::get_argument_types (const function_instance &instance,
+                                      vec<tree> &argument_types) const
+{
+  for (unsigned int i = 1; i < instance.get_arg_pattern ().arg_len; i++)
+    argument_types.quick_push (get_dt_t_with_index (instance, i));
+}
+
+/* A function implementation for vlxei functions.  */
+unsigned int
+vlxei::call_properties () const
+{
+  return CP_READ_MEMORY;
+}
+
+tree
+vlxei::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+/* A function implementation for vluxei functions.  */
+rtx
+vluxei::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode1 = TYPE_MODE (TREE_TYPE (exp));
+  machine_mode mode2 = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode = code_for_vlxei (UNSPEC_UNORDER_INDEXED_LOAD, mode1, mode2);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vloxei functions.  */
+rtx
+vloxei::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode1 = TYPE_MODE (TREE_TYPE (exp));
+  machine_mode mode2 = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode = code_for_vlxei (UNSPEC_ORDER_INDEXED_LOAD, mode1, mode2);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsuxei functions.  */
+unsigned int
+vsuxei::call_properties () const
+{
+  return CP_WRITE_MEMORY;
+}
+
+rtx
+vsuxei::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode1 = instance.get_arg_pattern ().arg_list[3];
+  machine_mode mode2 = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode = code_for_vsxei (UNSPEC_UNORDER_INDEXED_STORE, mode1, mode2);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsoxei functions.  */
+unsigned int
+vsoxei::call_properties () const
+{
+  return CP_WRITE_MEMORY;
+}
+
+rtx
+vsoxei::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode1 = instance.get_arg_pattern ().arg_list[3];
+  machine_mode mode2 = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode = code_for_vsxei (UNSPEC_ORDER_INDEXED_STORE, mode1, mode2);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vleff functions.  */
+unsigned int
+vleff::call_properties () const
+{
+  return CP_READ_MEMORY | CP_RAISE_LD_EXCEPTIONS;
+}
+
+char *
+vleff::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  unsigned int sew = GET_MODE_BITSIZE (GET_MODE_INNER (mode));
+  char name[NAME_MAXLEN];
+  snprintf (name, NAME_MAXLEN, "vle%dff", sew);
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  const char *pred = get_pred_str (instance.get_pred ());
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s%s", name, op, dt, pred);
+  if (this->can_be_overloaded_p (instance))
+    {
+      append_name (name);
+      append_name (get_pred_str (instance.get_pred (), true));
+      return finish_name ();
+    }
+  return nullptr;
+}
+
+tree
+vleff::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+void
+vleff::get_argument_types (const function_instance &instance,
+                           vec<tree> &argument_types) const
+{
+  for (unsigned int i = 1; i < instance.get_arg_pattern ().arg_len; i++)
+    argument_types.quick_push (get_dt_t_with_index (instance, i));
+  argument_types.quick_push (build_pointer_type (size_type_node));
+}
+
+bool 
+vleff::can_be_overloaded_p (const function_instance &instance) const
+{
+  return instance.get_pred () == PRED_m || instance.get_pred () == PRED_tu ||
+         instance.get_pred () == PRED_tamu ||
+         instance.get_pred () == PRED_tuma || instance.get_pred () == PRED_tumu;
+}
+
+gimple *
+vleff::fold (const function_instance &instance, gimple_stmt_iterator *gsi_in,
+             gcall *call_in) const
+{
+  /* split vleff (a, b, c) -> d = vleff (a, c) + b = readvl (d). */
+  auto_vec<tree, 8> vargs;
+
+  unsigned int offset = 2;
+
+  for (unsigned int i = 0; i < gimple_call_num_args (call_in); i++)
+    {
+      if (i == gimple_call_num_args (call_in) - offset)
+        continue;
+
+      vargs.quick_push (gimple_call_arg (call_in, i));
+    }
+
+  gimple *repl = gimple_build_call_vec (gimple_call_fn (call_in), vargs);
+  gimple_call_set_lhs (repl, gimple_call_lhs (call_in));
+
+  tree var = create_tmp_var (size_type_node, "new_vl");
+  tree tem = make_ssa_name (size_type_node);
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  char resolver[NAME_MAXLEN];
+  snprintf (resolver, NAME_MAXLEN, "readvl%s", mode2data_type_str (mode, unsigned_p, false));
+  function_instance fn_instance (resolver);
+  hashval_t hashval = fn_instance.hash ();
+  registered_function *rfn_slot =
+      function_table->find_with_hash (fn_instance, hashval);
+  tree decl = rfn_slot->decl;
+  
+  gimple *g = gimple_build_call (decl, 1, gimple_call_lhs (call_in));
+  gimple_call_set_lhs (g, var);
+  tree indirect = fold_build2 (
+      MEM_REF, size_type_node,
+      gimple_call_arg (call_in, gimple_call_num_args (call_in) - offset),
+      build_int_cst (build_pointer_type (size_type_node), 0));
+  gassign *assign = gimple_build_assign (indirect, tem);
+  gsi_insert_after (gsi_in, assign, GSI_SAME_STMT);
+  gsi_insert_after (gsi_in, gimple_build_assign (tem, var), GSI_SAME_STMT);
+  gsi_insert_after (gsi_in, g, GSI_SAME_STMT);
+
+  return repl;
+}
+
+rtx
+vleff::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vleff (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
 } // end namespace riscv_vector
 
 using namespace riscv_vector;
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 739ae60fff5..6d82b1c933d 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -35,6 +35,9 @@ along with GCC; see the file COPYING3. If not see
 DEF_RVV_FUNCTION(vsetvl, vsetvl, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
 DEF_RVV_FUNCTION(vsetvlmax, vsetvlmax, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
 /* Helper misc intrinsics for software programmer. */
+DEF_RVV_FUNCTION(readvl, readvl, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(readvl, readvl, (1, VITER(VI, unsigned)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(readvl, readvl, (1, VITER(VF, signed)), PAT_none, PRED_none, OP_none)
 DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VI, unsigned), VATTR(0, VI, signed)), PAT_none, PRED_none, OP_v)
 DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VI, signed), VATTR(0, VI, unsigned)), PAT_none, PRED_none, OP_v)
 DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VATTR(1, VCONVERFI, signed), VITER(VCONVERF, signed)), PAT_none, PRED_none, OP_v)
@@ -69,6 +72,73 @@ DEF_RVV_FUNCTION(vle, vle, (2, VITER(VF, signed), VATTR(0, VSUB, c_ptr)), pat_ma
 DEF_RVV_FUNCTION(vse, vse, (3, VITER(VI, signed), VATTR(0, VSUB, ptr), VATTR(0, VI, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
 DEF_RVV_FUNCTION(vse, vse, (3, VITER(VI, unsigned), VATTR(0, VSUB, uptr), VATTR(0, VI, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
 DEF_RVV_FUNCTION(vse, vse, (3, VITER(VF, signed), VATTR(0, VSUB, ptr), VATTR(0, VF, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vlm, vlm, (1, VITER(VB, signed)), PAT_none, PRED_void, OP_v)
+DEF_RVV_FUNCTION(vsm, vsm, (1, VITER(VB, signed)), PAT_none, PRED_void, OP_v)
+DEF_RVV_FUNCTION(vlse, vlse, (2, VITER(VI, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vlse, vlse, (2, VITER(VI, unsigned), VATTR(0, VSUB, c_uptr)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vlse, vlse, (2, VITER(VF, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vsse, vsse, (3, VITER(VI, signed), VATTR(0, VSUB, ptr), VATTR(0, VI, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsse, vsse, (3, VITER(VI, unsigned), VATTR(0, VSUB, uptr), VATTR(0, VI, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsse, vsse, (3, VITER(VF, signed), VATTR(0, VSUB, ptr), VATTR(0, VF, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V2UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V2UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V2UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V2UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V4UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V4UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V4UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V4UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V8UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V8UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V8UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V8UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V16UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V16UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V16UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V16UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V32UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V32UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V32UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V32UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V64UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V64UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V64UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V64UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V128UNITSI, signed), VATTR(0, VSUB, c_ptr), VITER(V128UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vluxei, vluxei, (3, VITER(V128UNITSI, unsigned), VATTR(0, VSUB, c_uptr), VITER(V128UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V2UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V2UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V2UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V2UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V4UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V4UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V4UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V4UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V8UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V8UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V8UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V8UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V16UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V16UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V16UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V16UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V32UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V32UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V32UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V32UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V64UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V64UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V64UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V64UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V128UNITSI, signed), VATTR(0, VSUB, c_ptr), VITER(V128UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vloxei, vloxei, (3, VITER(V128UNITSI, unsigned), VATTR(0, VSUB, c_uptr), VITER(V128UNITSI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V2UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V2UNITSI, unsigned), VATTR(0, V2UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V2UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V2UNITSI, unsigned), VATTR(0, V2UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V4UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V4UNITSI, unsigned), VATTR(0, V4UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V4UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V4UNITSI, unsigned), VATTR(0, V4UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V8UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V8UNITSI, unsigned), VATTR(0, V8UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V8UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V8UNITSI, unsigned), VATTR(0, V8UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V16UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V16UNITSI, unsigned), VATTR(0, V16UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V16UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V16UNITSI, unsigned), VATTR(0, V16UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V32UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V32UNITSI, unsigned), VATTR(0, V32UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V32UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V32UNITSI, unsigned), VATTR(0, V32UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V64UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V64UNITSI, unsigned), VATTR(0, V64UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V64UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V64UNITSI, unsigned), VATTR(0, V64UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V128UNITSI, signed), VATTR(0, VSUB, c_ptr), VITER(V128UNITSI, unsigned), VATTR(0, V128UNITSI, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsuxei, vsuxei, (4, VITER(V128UNITSI, unsigned), VATTR(0, VSUB, c_uptr), VITER(V128UNITSI, unsigned), VATTR(0, V128UNITSI, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V2UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V2UNITSI, unsigned), VATTR(0, V2UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V2UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V2UNITSI, unsigned), VATTR(0, V2UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V4UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V4UNITSI, unsigned), VATTR(0, V4UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V4UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V4UNITSI, unsigned), VATTR(0, V4UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V8UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V8UNITSI, unsigned), VATTR(0, V8UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V8UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V8UNITSI, unsigned), VATTR(0, V8UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V16UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V16UNITSI, unsigned), VATTR(0, V16UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V16UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V16UNITSI, unsigned), VATTR(0, V16UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V32UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V32UNITSI, unsigned), VATTR(0, V32UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V32UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V32UNITSI, unsigned), VATTR(0, V32UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V64UNITS, signed), VATTR(0, VSUB, c_ptr), VITER(V64UNITSI, unsigned), VATTR(0, V64UNITS, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V64UNITS, unsigned), VATTR(0, VSUB, c_uptr), VITER(V64UNITSI, unsigned), VATTR(0, V64UNITS, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V128UNITSI, signed), VATTR(0, VSUB, c_ptr), VITER(V128UNITSI, unsigned), VATTR(0, V128UNITSI, signed)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V128UNITSI, unsigned), VATTR(0, VSUB, c_uptr), VITER(V128UNITSI, unsigned), VATTR(0, V128UNITSI, unsigned)), pat_mask_ignore_policy, pred_mask2, OP_v)
+DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VI, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VI, unsigned), VATTR(0, VSUB, c_uptr)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VF, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
 #undef REQUIRED_EXTENSIONS
 #undef DEF_RVV_FUNCTION
 #undef VITER
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h
index 90063005024..a37e21876a6 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.h
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.h
@@ -522,6 +522,24 @@ public:
   virtual rtx expand (const function_instance &, tree, rtx) const override;
 };
 
+/* A function_base for readvl functions.  */
+class readvl : public function_builder
+{
+public:
+  // use the same construction function as the function_builder
+  using function_builder::function_builder;
+
+  virtual unsigned int call_properties () const override;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
 /* A function_base for Miscellaneous functions.  */
 class misc : public function_builder
 {
@@ -654,6 +672,166 @@ public:
   virtual rtx expand (const function_instance &, tree, rtx) const override;
 };
 
+/* A function_base for vlse functions.  */
+class vlse : public loadstore
+{
+public:
+  // use the same construction function as the loadstore
+  using loadstore::loadstore;
+
+  virtual unsigned int call_properties () const override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+  
+  virtual bool can_be_overloaded_p (const function_instance &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vsse functions.  */
+class vsse : public loadstore
+{
+public:
+  // use the same construction function as the loadstore
+  using loadstore::loadstore;
+
+  virtual unsigned int call_properties () const override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+  
+  virtual bool can_be_overloaded_p (const function_instance &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vlm functions.  */
+class vlm : public loadstore
+{
+public:
+  // use the same construction function as the loadstore
+  using loadstore::loadstore;
+  
+  virtual char * assemble_name (function_instance &) override;
+  
+  virtual unsigned int call_properties () const override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vsm functions.  */
+class vsm : public loadstore
+{
+public:
+  // use the same construction function as the loadstore
+  using loadstore::loadstore;
+  
+  virtual char * assemble_name (function_instance &) override;
+    
+  virtual unsigned int call_properties () const override;
+  
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+  
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for indexed loadstore functions.  */
+class indexedloadstore : public function_builder
+{
+public:
+  // use the same construction function as the function_builder
+  using function_builder::function_builder;
+  
+  virtual char * assemble_name (function_instance &) override;
+  
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+};
+
+/* A function_base for vlxei functions.  */
+class vlxei : public indexedloadstore
+{
+public:
+  // use the same construction function as the indexedloadstore
+  using indexedloadstore::indexedloadstore;
+
+  virtual unsigned int call_properties () const override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+};
+
+/* A function_base for vluxei functions.  */
+class vluxei : public vlxei
+{
+public:
+  // use the same construction function as the vlxei
+  using vlxei::vlxei;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vloxei functions.  */
+class vloxei : public vlxei
+{
+public:
+  // use the same construction function as the vlxei
+  using vlxei::vlxei;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vsuxei functions.  */
+class vsuxei : public indexedloadstore
+{
+public:
+  // use the same construction function as the indexedloadstore
+  using indexedloadstore::indexedloadstore;
+  
+  virtual unsigned int call_properties () const override;
+  
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vsoxei functions.  */
+class vsoxei : public indexedloadstore
+{
+public:
+  // use the same construction function as the indexedloadstore
+  using indexedloadstore::indexedloadstore;
+  
+  virtual unsigned int call_properties () const override;
+  
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vleff functions.  */
+class vleff : public function_builder
+{
+public:
+  // use the same construction function as the function_builder
+  using function_builder::function_builder;
+
+  virtual unsigned int call_properties () const override;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+  
+  virtual bool can_be_overloaded_p (const function_instance &) const override;
+
+  virtual gimple * fold (const function_instance &, gimple_stmt_iterator *,
+        gcall *) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+
 } // namespace riscv_vector
 
 #endif // end GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-iterators.def b/gcc/config/riscv/riscv-vector-builtins-iterators.def
index 8f2ea912804..1c3ddd64a3d 100644
--- a/gcc/config/riscv/riscv-vector-builtins-iterators.def
+++ b/gcc/config/riscv/riscv-vector-builtins-iterators.def
@@ -118,6 +118,78 @@ DEF_RISCV_ARG_MODE_ATTR(V64BITI, 0, VNx2DI, VNx2DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 1, VNx4DI, VNx4DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 2, VNx8DI, VNx8DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 3, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VNOT64BITI, 18)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 6, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 7, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 8, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 10, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 11, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 12, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 13, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 15, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 16, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VNOT64BITI, 17, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V16, 30)
+DEF_RISCV_ARG_MODE_ATTR(V16, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 6, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 7, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 8, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 9, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 10, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 11, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 12, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 13, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 14, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 15, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 16, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 17, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 18, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 19, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 20, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16, 21, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 22, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 23, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 24, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 25, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 26, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 27, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 28, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16, 29, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VI16, 21)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 6, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 7, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 8, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 9, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 10, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 11, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 12, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 13, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 14, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 15, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 16, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 17, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 18, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 19, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI16, 20, VNx16DI, VNx16DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERFI, 9)
 DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 0, VNx2SF, VNx2SI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 1, VNx4SF, VNx4SI, TARGET_ANY)
@@ -289,6 +361,142 @@ DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 21, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 22, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 23, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 24, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V2UNITS, 6)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 1, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 2, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 3, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 4, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITS, 5, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V4UNITS, 6)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 0, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 1, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 2, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 3, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 4, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITS, 5, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V8UNITS, 6)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 0, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 1, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 2, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 3, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 4, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITS, 5, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V16UNITS, 6)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 0, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 1, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 2, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 3, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 4, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITS, 5, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V32UNITS, 4)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITS, 0, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITS, 1, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITS, 2, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITS, 3, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V64UNITS, 2)
+DEF_RISCV_ARG_MODE_ATTR(V64UNITS, 0, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V64UNITS, 1, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V2UNITSI, 4)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITSI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITSI, 1, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITSI, 2, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V2UNITSI, 3, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V4UNITSI, 4)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITSI, 0, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITSI, 1, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITSI, 2, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V4UNITSI, 3, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V8UNITSI, 4)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITSI, 0, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITSI, 1, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITSI, 2, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V8UNITSI, 3, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V16UNITSI, 4)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITSI, 0, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITSI, 1, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITSI, 2, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V16UNITSI, 3, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V32UNITSI, 3)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITSI, 0, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITSI, 1, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V32UNITSI, 2, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V64UNITSI, 2)
+DEF_RISCV_ARG_MODE_ATTR(V64UNITSI, 0, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V64UNITSI, 1, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V128UNITSI, 1)
+DEF_RISCV_ARG_MODE_ATTR(V128UNITSI, 0, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWI, 15)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 6, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 7, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 8, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 9, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 10, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 11, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 12, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 13, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWI, 14, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWINOQI, 9)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 0, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 1, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 2, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 3, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 4, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 5, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 6, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 7, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWINOQI, 8, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWF, 4)
+DEF_RISCV_ARG_MODE_ATTR(VWF, 0, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWF, 1, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWF, 2, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWF, 3, VNx16SF, VNx16SF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VQWI, 9)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 5, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 6, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 7, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQWI, 8, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VOWI, 4)
+DEF_RISCV_ARG_MODE_ATTR(VOWI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOWI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOWI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOWI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWREDI, 18)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 6, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 7, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 8, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 10, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 11, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 12, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 13, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 15, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 16, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWREDI, 17, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWREDF, 5)
+DEF_RISCV_ARG_MODE_ATTR(VWREDF, 0, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWREDF, 1, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWREDF, 2, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWREDF, 3, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWREDF, 4, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VM, 69)
 DEF_RISCV_ARG_MODE_ATTR(VM, 0, VNx2BI, VNx2BI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VM, 1, VNx4BI, VNx4BI, TARGET_ANY)
@@ -437,6 +645,175 @@ DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 18, VNx2DI, VNx4BI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 19, VNx4DI, VNx8BI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 20, VNx8DI, VNx16BI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 21, VNx16DI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VW, 19)
+DEF_RISCV_ARG_MODE_ATTR(VW, 0, VNx2QI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 1, VNx4QI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 2, VNx8QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 3, VNx16QI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 4, VNx32QI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 5, VNx64QI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 6, VNx2HI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 7, VNx4HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 8, VNx8HI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 9, VNx16HI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 10, VNx32HI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 11, VNx2SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 12, VNx4SI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 13, VNx8SI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 14, VNx16SI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VW, 15, VNx2SF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VW, 16, VNx4SF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VW, 17, VNx8SF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VW, 18, VNx16SF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VQW, 9)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 0, VNx2QI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 1, VNx4QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 2, VNx8QI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 3, VNx16QI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 4, VNx32QI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 5, VNx2HI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 6, VNx4HI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 7, VNx8HI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VQW, 8, VNx16HI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VOW, 4)
+DEF_RISCV_ARG_MODE_ATTR(VOW, 0, VNx2QI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOW, 1, VNx4QI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOW, 2, VNx8QI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VOW, 3, VNx16QI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VMAP, 31)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 6, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 7, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 8, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 10, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 11, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 12, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 13, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 15, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 16, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 17, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 18, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 19, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 20, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 21, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 22, VNx2SF, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 23, VNx4SF, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 24, VNx8SF, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 25, VNx16SF, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 26, VNx32SF, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 27, VNx2DF, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 28, VNx4DF, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 29, VNx8DF, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAP, 30, VNx16DF, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VMAPI16, 30)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 0, VNx2QI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 1, VNx4QI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 2, VNx8QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 3, VNx16QI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 4, VNx32QI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 5, VNx64QI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 6, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 7, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 8, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 9, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 10, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 11, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 12, VNx2SI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 13, VNx4SI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 14, VNx8SI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 15, VNx16SI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 16, VNx32SI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 17, VNx2DI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 18, VNx4DI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 19, VNx8DI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 20, VNx16DI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 21, VNx2SF, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 22, VNx4SF, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 23, VNx8SF, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 24, VNx16SF, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 25, VNx32SF, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 26, VNx2DF, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 27, VNx4DF, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 28, VNx8DF, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VMAPI16, 29, VNx16DF, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWMAP, 4)
+DEF_RISCV_ARG_MODE_ATTR(VWMAP, 0, VNx2SF, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWMAP, 1, VNx4SF, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWMAP, 2, VNx8SF, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWMAP, 3, VNx16SF, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWFMAP, 9)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 0, VNx2HI, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 1, VNx4HI, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 2, VNx8HI, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 3, VNx16HI, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 4, VNx32HI, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 5, VNx2SI, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 6, VNx4SI, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 7, VNx8SI, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWFMAP, 8, VNx16SI, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VLMUL1, 31)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 0, VNx2QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 1, VNx4QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 2, VNx8QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 4, VNx32QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 5, VNx64QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 6, VNx128QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 7, VNx2HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 8, VNx4HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 10, VNx16HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 11, VNx32HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 12, VNx64HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 13, VNx2SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 15, VNx8SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 16, VNx16SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 17, VNx32SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 18, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 19, VNx4DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 20, VNx8DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 21, VNx16DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 22, VNx2SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 23, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 24, VNx8SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 25, VNx16SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 26, VNx32SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 27, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 28, VNx4DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 29, VNx8DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMUL1, 30, VNx16DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VWLMUL1, 23)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 0, VNx2QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 1, VNx4QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 2, VNx8QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 3, VNx16QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 4, VNx32QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 5, VNx64QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 6, VNx128QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 7, VNx2HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 8, VNx4HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 9, VNx8HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 10, VNx16HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 11, VNx32HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 12, VNx64HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 13, VNx2SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 14, VNx4SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 15, VNx8SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 16, VNx16SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 17, VNx32SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 18, VNx2SF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 19, VNx4SF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 20, VNx8SF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 21, VNx16SF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VWLMUL1, 22, VNx32SF, VNx2DF, TARGET_DOUBLE_FLOAT)
 
 #undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
 #undef DEF_RISCV_ARG_MODE_ATTR
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index 4cb5e79421d..175d6da4695 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -354,11 +354,35 @@ rvv_classify_vsew_field (machine_mode mode)
 enum vlmul_field_enum
 rvv_classify_vlmul_field (machine_mode mode)
 {
-	/* Case 1: LMUL = 1. */
+  /* Case 1: Mask. */
+  if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+    {
+	switch (mode)
+	  {
+	  case E_VNx8BImode:
+	    return VLMUL_FIELD_111;
+	  case E_VNx4BImode:
+	    return VLMUL_FIELD_110;
+	  case E_VNx2BImode:
+	    return VLMUL_FIELD_101;
+	  case E_VNx16BImode:
+	    return VLMUL_FIELD_000;
+	  case E_VNx32BImode:
+	    return VLMUL_FIELD_001;
+	  case E_VNx64BImode:
+	    return VLMUL_FIELD_010;
+	  case E_VNx128BImode:
+	    return VLMUL_FIELD_011;
+	  default:
+	    gcc_unreachable ();
+	  }
+    }
+
+  /* Case 2: LMUL = 1. */
   if (known_eq (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
     return VLMUL_FIELD_000;
   
-	/* Case 2: Fractional LMUL. */
+  /* Case 3: LMUL > 1. */
   if (known_gt (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
     {
 	unsigned int factor = exact_div (GET_MODE_SIZE (mode), 
@@ -376,7 +400,7 @@ rvv_classify_vlmul_field (machine_mode mode)
 	  }
     }
 	
-	/* Case 3: Fractional LMUL. */
+  /* Case 4: Fractional LMUL. */
   if (known_lt (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
     {
 	unsigned int factor = exact_div (BYTES_PER_RISCV_VECTOR, 
@@ -393,7 +417,8 @@ rvv_classify_vlmul_field (machine_mode mode)
 	    gcc_unreachable ();
 	  }
     }
-	gcc_unreachable ();
+
+  gcc_unreachable ();
 }
 
 /* Return the vsew field for a vtype bitmap. */
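
For reference (not part of the patch), the MODE_VECTOR_BOOL handling added above boils down to a mapping from the number of mask elements to the vtype LMUL field.  A minimal standalone sketch of that mapping; the m1/m2/.../mf8 annotations assume the standard vtype vlmul encoding:

/* Standalone model of the new mask-mode branch of
   rvv_classify_vlmul_field.  Assumption: VLMUL_FIELD_000..111 follow
   the vtype vlmul encoding (000 = m1, 001 = m2, 010 = m4, 011 = m8,
   101 = mf8, 110 = mf4, 111 = mf2).  */
enum vlmul_field { VLMUL_FIELD_000, VLMUL_FIELD_001, VLMUL_FIELD_010,
                   VLMUL_FIELD_011, VLMUL_FIELD_100, VLMUL_FIELD_101,
                   VLMUL_FIELD_110, VLMUL_FIELD_111 };

static enum vlmul_field
classify_mask_vlmul (unsigned int nunits)
{
  switch (nunits)
    {
    case 2:   return VLMUL_FIELD_101;  /* VNx2BI   -> mf8 */
    case 4:   return VLMUL_FIELD_110;  /* VNx4BI   -> mf4 */
    case 8:   return VLMUL_FIELD_111;  /* VNx8BI   -> mf2 */
    case 16:  return VLMUL_FIELD_000;  /* VNx16BI  -> m1  */
    case 32:  return VLMUL_FIELD_001;  /* VNx32BI  -> m2  */
    case 64:  return VLMUL_FIELD_010;  /* VNx64BI  -> m4  */
    case 128: return VLMUL_FIELD_011;  /* VNx128BI -> m8  */
    default:  __builtin_unreachable ();
    }
}
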
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 238c972de09..ae4f5b50214 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -190,6 +190,7 @@
 ;; nop		no operation
 ;; ghost	an instruction that produces no real code
 ;; bitmanip	bit manipulation instructions
+;; csr csr instructions
 ;; vsetvl vector configuration setting
 ;; vload vector whole register load
 ;; vstore vector whole register store
@@ -247,7 +248,7 @@
   "unknown,branch,jump,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
    fmadd,fdiv,fcmp,fcvt,fsqrt,multi,auipc,sfb_alu,nop,ghost,bitmanip,rotate,
-   vsetvl,vload,vstore,vcopy,vle,vse,vlse,vsse,vluxei,vloxei,vsuxei,vsoxei,vleff,
+   csr,vsetvl,vload,vstore,vcopy,vle,vse,vlse,vsse,vluxei,vloxei,vsuxei,vsoxei,vleff,
    varith,vadc,vmadc,vwarith,vlogical,vshift,vcmp,vmul,vmulh,vdiv,vwmul,vmadd,vwmadd,
    vmerge,vmove,vsarith,vsmul,vscaleshift,vclip,vfsqrt,vfsgnj,vfclass,vfcvt,vfwcvt,vfncvt,
    vwcvt,vncvt,vreduc,vwreduc,vmask,vcpop,vmsetbit,viota,vid,vmv_x_s,vmv_s_x,vfmv_f_s,vfmv_s_f,
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index df9011ee901..501980d822f 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -23,7 +23,9 @@
   UNSPEC_VSETVLI
   ;; RVV instructions.
   UNSPEC_RVV
-   ;; reinterpret
+  ;; read vl.
+  UNSPEC_READVL
+  ;; reinterpret
   UNSPEC_REINTERPRET
   ;; lmul_ext
   UNSPEC_LMUL_EXT
@@ -37,15 +39,136 @@
   UNSPEC_VEC_DUPLICATE
   ;; vector select
   UNSPEC_SELECT
-  
-  ;; vle/vse
+
+  ;; vle/vse/vlse/vsse.
+  ;; vluxei/vloxei/vsuxei/vsoxei.
+  ;; vleff.
+  ;; vlseg/vsseg/vlsegff.
+  ;; vlsseg/vssseg.
+  ;; vluxseg/vloxseg/vsuxseg/vsoxseg.
   UNSPEC_UNIT_STRIDE_LOAD
   UNSPEC_UNIT_STRIDE_STORE
-  
+  UNSPEC_STRIDED_LOAD
+  UNSPEC_STRIDED_STORE
+  UNSPEC_UNORDER_INDEXED_LOAD
+  UNSPEC_ORDER_INDEXED_LOAD
+  UNSPEC_UNORDER_INDEXED_STORE
+  UNSPEC_ORDER_INDEXED_STORE
+  UNSPEC_FAULT_ONLY_FIRST_LOAD
+
+  ;; multiply-add.
+  UNSPEC_MACC
+  UNSPEC_NMACC
+  UNSPEC_MSAC
+  UNSPEC_NMSAC
+  UNSPEC_MADD
+  UNSPEC_NMADD
+  UNSPEC_MSUB
+  UNSPEC_NMSUB
+
+  ;; unspec to distinguish double-widening operations.
+  UNSPEC_DOUBLE_WIDEN
   ;; unspec merge
   UNSPEC_MERGE
-  
+  ;; unspec move
+  UNSPEC_MOVE
+
+  ;; saturating op.
+  UNSPEC_AADDU
+  UNSPEC_AADD
+  UNSPEC_ASUBU
+  UNSPEC_ASUB
+  UNSPEC_SMUL
+
+  ;; scaling shift.
+  UNSPEC_SSRL
+  UNSPEC_SSRA
+
+  ;; narrowing clip.
+  UNSPEC_SIGNED_CLIP
+  UNSPEC_UNSIGNED_CLIP
+
+  ;; reciprocal.
+  UNSPEC_RSQRT7
+  UNSPEC_REC7
+
+  ;; Sign-Injection.
+  UNSPEC_NCOPYSIGN
+  UNSPEC_XORSIGN
+
+  ;; Classify.
+  UNSPEC_FCLASS
+
+  ;; convert.
+  UNSPEC_FLOAT_TO_SIGNED_INT
+  UNSPEC_FLOAT_TO_UNSIGNED_INT
+
+  ;; rounding.
+  UNSPEC_ROD
+
+  ;; reduction operations.
+  UNSPEC_REDUC_SUM
+  UNSPEC_REDUC_UNORDERED_SUM
+  UNSPEC_REDUC_ORDERED_SUM
+  UNSPEC_REDUC_MAX
+  UNSPEC_REDUC_MAXU
+  UNSPEC_REDUC_MIN
+  UNSPEC_REDUC_MINU
+  UNSPEC_REDUC_AND
+  UNSPEC_REDUC_OR
+  UNSPEC_REDUC_XOR
+
+  ;; vcpop
+  UNSPEC_VCPOP
+  ;; find-first-set mask bit.
+  UNSPEC_FIRST
+  ;; set-before-first mask bit.
+  UNSPEC_SBF
+  ;; set-including-first mask bit.
+  UNSPEC_SIF
+  ;; set-only-first mask bit.
+  UNSPEC_SOF
+  ;; iota
+  UNSPEC_IOTA
+  ;; id
+  UNSPEC_ID
+  ;; vfmv.s.x, vmv.s.x
+  UNSPEC_VMV_SX
+
+  ;; slide instructions.
+  UNSPEC_SLIDEUP
+  UNSPEC_SLIDEDOWN
+  UNSPEC_SLIDE1UP
+  UNSPEC_SLIDE1DOWN
+
+  ;; rgather
+  UNSPEC_RGATHER
+  ;; rgatherei16
+  UNSPEC_RGATHEREI16
+
+  ;; compress
+  UNSPEC_COMPRESS
+
+  ;; lowpart of the mode
+  UNSPEC_LO
+  ;; highpart of the mode
+  UNSPEC_HI
+
+  UNSPEC_VADD UNSPEC_VSUB UNSPEC_VRSUB
+  UNSPEC_VAND UNSPEC_VIOX UNSPEC_VXOR
+  UNSPEC_VMIN UNSPEC_VMINU UNSPEC_VMAX UNSPEC_VMAXU
+  UNSPEC_VMUL UNSPEC_VMULH UNSPEC_VMULHU UNSPEC_VMULHSU
+  UNSPEC_VDIV UNSPEC_VDIVU UNSPEC_VREM UNSPEC_VREMU
+  UNSPEC_VSADD UNSPEC_VSADDU UNSPEC_VSSUB UNSPEC_VSSUBU
+  UNSPEC_VAADD UNSPEC_VAADDU UNSPEC_VASUB UNSPEC_VASUBU
+  UNSPEC_VSMUL
+  UNSPEC_VADC UNSPEC_VSBC
+  UNSPEC_VMADC UNSPEC_VMSBC
+  UNSPEC_VMSEQ UNSPEC_VMSNE UNSPEC_VMSLE UNSPEC_VMSLEU UNSPEC_VMSGT UNSPEC_VMSGTU
+  UNSPEC_VMSLT UNSPEC_VMSLTU UNSPEC_VMSGE UNSPEC_VMSGEU
+  UNSPEC_VMERGE
   UNSPEC_VMV
+  UNSPEC_VMVS
 ])
 
 ;; All vector modes supported.
@@ -97,6 +220,29 @@
 ;; All vector modes supported for integer sew = 64.
 (define_mode_iterator V64BITI [VNx2DI VNx4DI VNx8DI VNx16DI])
 
+;; All vector modes supported for integer sew < 64.
+(define_mode_iterator VNOT64BITI [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI VNx128QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI])
+
+;; All vector modes supported in vrgatherei16.
+(define_mode_iterator V16 [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx2DI VNx4DI VNx8DI VNx16DI
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+(define_mode_iterator VI16 [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx2DI VNx4DI VNx8DI VNx16DI])
+
 ;; vector integer and float-point mode interconversion.
 (define_mode_attr VCONVERFI [
   (VNx2SF "VNx2SI")
@@ -227,6 +373,135 @@
   (VNx16SF "TARGET_HARD_FLOAT")
   (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")])
 
+;; vector modes nunits = 2.
+(define_mode_iterator V2UNITS [
+  VNx2QI
+  VNx2HI
+  VNx2SI
+  VNx2DI
+  (VNx2SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT")])
+
+;; vector modes nunits = 4.
+(define_mode_iterator V4UNITS [
+  VNx4QI
+  VNx4HI
+  VNx4SI
+  VNx4DI
+  (VNx4SF "TARGET_HARD_FLOAT")
+  (VNx4DF "TARGET_DOUBLE_FLOAT")])
+
+;; vector modes nunits = 8.
+(define_mode_iterator V8UNITS [
+  VNx8QI
+  VNx8HI
+  VNx8SI
+  VNx8DI
+  (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx8DF "TARGET_DOUBLE_FLOAT")])
+
+;; vector modes nunits = 16.
+(define_mode_iterator V16UNITS [
+  VNx16QI
+  VNx16HI
+  VNx16SI
+  VNx16DI
+  (VNx16SF "TARGET_HARD_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; vector modes nunits = 32.
+(define_mode_iterator V32UNITS [
+  VNx32QI
+  VNx32HI
+  VNx32SI
+  (VNx32SF "TARGET_HARD_FLOAT")])
+
+;; vector modes nunits = 64.
+(define_mode_iterator V64UNITS [
+  VNx64QI
+  VNx64HI])
+
+;; vector index modes nunits = 2.
+(define_mode_iterator V2UNITSI [
+  VNx2QI
+  VNx2HI
+  VNx2SI
+  VNx2DI])
+
+;; vector index modes nunits = 4.
+(define_mode_iterator V4UNITSI [
+  VNx4QI
+  VNx4HI
+  VNx4SI
+  VNx4DI])
+
+;; vector index modes nunits = 8.
+(define_mode_iterator V8UNITSI [
+  VNx8QI
+  VNx8HI
+  VNx8SI
+  VNx8DI])
+
+;; vector index modes nunits = 16.
+(define_mode_iterator V16UNITSI [
+  VNx16QI
+  VNx16HI
+  VNx16SI
+  VNx16DI])
+
+;; vector index modes nunits = 32.
+(define_mode_iterator V32UNITSI [
+  VNx32QI
+  VNx32HI
+  VNx32SI])
+
+;; vector index modes nunits = 64.
+(define_mode_iterator V64UNITSI [
+  VNx64QI
+  VNx64HI])
+
+;; vector index modes nunits = 128.
+(define_mode_iterator V128UNITSI [VNx128QI])
+
+;; All vector modes supported for widening integer alu.
+(define_mode_iterator VWI [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI
+  VNx2SI VNx4SI VNx8SI VNx16SI])
+
+;; All vector modes supported for widening integer alu, excluding QImode element modes.
+(define_mode_iterator VWINOQI [
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI
+  VNx2SI VNx4SI VNx8SI VNx16SI])
+
+;; All vector modes supported for widening floating point alu.
+(define_mode_iterator VWF [
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF"TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_DOUBLE_FLOAT")])
+
+;; All vector modes supported for quad-widening integer alu.
+(define_mode_iterator VQWI [
+  VNx2QI VNx4QI VNx8QI VNx16QI
+  VNx32QI VNx2HI VNx4HI VNx8HI
+  VNx16HI])
+
+;; All vector modes supported for oct-widening integer alu.
+(define_mode_iterator VOWI [
+  VNx2QI VNx4QI VNx8QI VNx16QI])
+
+;; All vector modes supported for widening integer reduction operations.
+(define_mode_iterator VWREDI [
+  VNx2QI VNx4QI VNx8QI VNx16QI
+  VNx32QI VNx64QI VNx128QI VNx2HI
+  VNx4HI VNx8HI VNx16HI VNx32HI
+  VNx64HI VNx2SI VNx4SI VNx8SI
+  VNx16SI VNx32SI])
+
+;; All vector modes supported for widening floating-point reduction operations.
+(define_mode_iterator VWREDF [
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT")
+  (VNx8SF "TARGET_HARD_FLOAT") (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")])
+
 ;; Map a vector int or float mode to a vector compare mode.
 (define_mode_attr VM [
   (VNx2BI "VNx2BI") (VNx4BI "VNx4BI") (VNx8BI "VNx8BI") (VNx16BI "VNx16BI")
@@ -250,6 +525,26 @@
   (VNx32SF "VNx32BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI")
   (VNx16DF "VNx16BI")])
 
+(define_mode_attr vm [
+  (VNx2QI "vnx2bi") (VNx4QI "vnx4bi") (VNx8QI "vnx8bi") (VNx16QI "vnx16bi")
+  (VNx32QI "vnx32bi") (VNx64QI "vnx64bi") (VNx128QI "vnx128bi") (VNx2HI "vnx2bi")
+  (VNx4HI "vnx4bi") (VNx8HI "vnx8bi") (VNx16HI "vnx16bi") (VNx32HI "vnx32bi")
+  (VNx64HI "vnx64bi") (VNx2SI "vnx2bi") (VNx4SI "vnx4bi") (VNx8SI "vnx8bi")
+  (VNx16SI "vnx16bi") (VNx32SI "vnx32bi") (VNx2DI "vnx2bi") (VNx4DI "vnx4bi")
+  (VNx8DI "vnx8bi") (VNx16DI "vnx16bi")
+  (VNx2SF "vnx2bi") (VNx4SF "vnx4bi") (VNx8SF "vnx8bi") (VNx16SF "vnx16bi")
+  (VNx32SF "vnx32bi") (VNx2DF "vnx2bi") (VNx4DF "vnx4bi") (VNx8DF "vnx8bi")
+  (VNx16DF "vnx16bi")
+  (VNx2QI "vnx2bi") (VNx4QI "vnx4bi") (VNx8QI "vnx8bi") (VNx16QI "vnx16bi")
+  (VNx32QI "vnx32bi") (VNx64QI "vnx64bi") (VNx128QI "vnx128bi") (VNx2HI "vnx2bi")
+  (VNx4HI "vnx4bi") (VNx8HI "vnx8bi") (VNx16HI "vnx16bi") (VNx32HI "vnx32bi")
+  (VNx64HI "vnx64bi") (VNx2SI "vnx2bi") (VNx4SI "vnx4bi") (VNx8SI "vnx8bi")
+  (VNx16SI "vnx16bi") (VNx32SI "vnx32bi") (VNx2DI "vnx2bi") (VNx4DI "vnx4bi")
+  (VNx8DI "vnx8bi") (VNx16DI "vnx16bi")
+  (VNx2SF "vnx2bi") (VNx4SF "vnx4bi") (VNx8SF "vnx8bi") (VNx16SF "vnx16bi")
+  (VNx32SF "vnx32bi") (VNx2DF "vnx2bi") (VNx4DF "vnx4bi") (VNx8DF "vnx8bi")
+  (VNx16DF "vnx16bi")])
+
 ;; Map a vector mode to its element mode.
 (define_mode_attr VSUB [
   (VNx2QI "QI") (VNx4QI "QI") (VNx8QI "QI") (VNx16QI "QI")
@@ -262,6 +557,17 @@
   (VNx32SF "SF") (VNx2DF "DF") (VNx4DF "DF") (VNx8DF "DF")
   (VNx16DF "DF")])
 
+(define_mode_attr vsub [
+  (VNx2QI "qi") (VNx4QI "qi") (VNx8QI "qi") (VNx16QI "qi")
+  (VNx32QI "qi") (VNx64QI "qi") (VNx128QI "qi") (VNx2HI "hi")
+  (VNx4HI "hi") (VNx8HI "hi") (VNx16HI "hi") (VNx32HI "hi")
+  (VNx64HI "hi") (VNx2SI "si") (VNx4SI "si") (VNx8SI "si")
+  (VNx16SI "si") (VNx32SI "si") (VNx2DI "di") (VNx4DI "di")
+  (VNx8DI "di") (VNx16DI "di")
+  (VNx2SF "sf") (VNx4SF "sf") (VNx8SF "sf") (VNx16SF "sf")
+  (VNx32SF "sf") (VNx2DF "df") (VNx4DF "df") (VNx8DF "df")
+  (VNx16DF "df")])
+
 (define_mode_attr VDI_TO_VSI [
   (VNx2QI "VNx4SI") (VNx4QI "VNx4SI") (VNx8QI "VNx4SI") (VNx16QI "VNx4SI") (VNx32QI "VNx4SI") (VNx64QI "VNx4SI") (VNx128QI "VNx4SI")
   (VNx2HI "VNx4SI") (VNx4HI "VNx4SI") (VNx8HI "VNx4SI") (VNx16HI "VNx4SI") (VNx32HI "VNx4SI") (VNx64HI "VNx4SI")
@@ -306,19 +612,456 @@
   (VNx2SF "1") (VNx4SF "1") (VNx8SF "2") (VNx16SF "4")
   (VNx32SF "8") (VNx2DF "1") (VNx4DF "2") (VNx8DF "4")
   (VNx16DF "8")])
+
+;; Map a vector int or float mode to widening vector mode.
+(define_mode_attr VW [
+  (VNx2QI "VNx2HI") (VNx4QI "VNx4HI") (VNx8QI "VNx8HI") (VNx16QI "VNx16HI") (VNx32QI "VNx32HI") (VNx64QI "VNx64HI")
+  (VNx2HI "VNx2SI") (VNx4HI "VNx4SI") (VNx8HI "VNx8SI") (VNx16HI "VNx16SI") (VNx32HI "VNx32SI")
+  (VNx2SI "VNx2DI") (VNx4SI "VNx4DI") (VNx8SI "VNx8DI") (VNx16SI "VNx16DI")
+  (VNx2SF "VNx2DF") (VNx4SF "VNx4DF") (VNx8SF "VNx8DF") (VNx16SF "VNx16DF")])
+
+(define_mode_attr vw [
+  (VNx2QI "vnx2hi") (VNx4QI "vnx4hi") (VNx8QI "vnx8hi") (VNx16QI "vnx16hi") (VNx32QI "vnx32hi") (VNx64QI "vnx64hi")
+  (VNx2HI "vnx2si") (VNx4HI "vnx4si") (VNx8HI "vnx8si") (VNx16HI "vnx16si") (VNx32HI "vnx32si")
+  (VNx2SI "vnx2di") (VNx4SI "vnx4di") (VNx8SI "vnx8di") (VNx16SI "vnx16di")
+  (VNx2SF "vnx2df") (VNx4SF "vnx4df") (VNx8SF "vnx8df") (VNx16SF "vnx16df")])
+
+;; Map a vector int or float mode to quad-widening vector mode.
+(define_mode_attr VQW [
+  (VNx2QI "VNx2SI") (VNx4QI "VNx4SI") (VNx8QI "VNx8SI") (VNx16QI "VNx16SI")
+  (VNx32QI "VNx32SI") (VNx2HI "VNx2DI") (VNx4HI "VNx4DI") (VNx8HI "VNx8DI")
+  (VNx16HI "VNx16DI")])
+
+(define_mode_attr vqw [
+  (VNx2QI "vnx2si") (VNx4QI "vnx4si") (VNx8QI "vnx8si") (VNx16QI "vnx16si")
+  (VNx32QI "vnx32si") (VNx2HI "vnx2di") (VNx4HI "vnx4di") (VNx8HI "vnx8di")
+  (VNx16HI "vnx16di")])
+
+;; Map a vector int or float mode to oct-widening vector mode.
+(define_mode_attr VOW [
+  (VNx2QI "VNx2DI") (VNx4QI "VNx4DI") (VNx8QI "VNx8DI") (VNx16QI "VNx16DI")])
+
+(define_mode_attr vow [
+  (VNx2QI "vnx2di") (VNx4QI "vnx4di") (VNx8QI "vnx8di") (VNx16QI "vnx16di")])
+
+;; Map same size mode.
+(define_mode_attr VMAP [
+  (VNx2QI "VNx2QI") (VNx4QI "VNx4QI") (VNx8QI "VNx8QI") (VNx16QI "VNx16QI")
+  (VNx32QI "VNx32QI") (VNx64QI "VNx64QI") (VNx128QI "VNx128QI") (VNx2HI "VNx2HI")
+  (VNx4HI "VNx4HI") (VNx8HI "VNx8HI") (VNx16HI "VNx16HI") (VNx32HI "VNx32HI")
+  (VNx64HI "VNx64HI") (VNx2SI "VNx2SI") (VNx4SI "VNx4SI") (VNx8SI "VNx8SI")
+  (VNx16SI "VNx16SI") (VNx32SI "VNx32SI") (VNx2DI "VNx2DI") (VNx4DI "VNx4DI")
+  (VNx8DI "VNx8DI") (VNx16DI "VNx16DI")
+  (VNx2SF "VNx2SI") (VNx4SF "VNx4SI") (VNx8SF "VNx8SI") (VNx16SF "VNx16SI")
+  (VNx32SF "VNx32SI") (VNx2DF "VNx2DI") (VNx4DF "VNx4DI") (VNx8DF "VNx8DI")
+  (VNx16DF "VNx16DI")])
+
+(define_mode_attr VMAPI16 [
+  (VNx2QI "VNx2HI") (VNx4QI "VNx4HI") (VNx8QI "VNx8HI") (VNx16QI "VNx16HI")
+  (VNx32QI "VNx32HI") (VNx64QI "VNx64HI") (VNx2HI "VNx2HI")
+  (VNx4HI "VNx4HI") (VNx8HI "VNx8HI") (VNx16HI "VNx16HI") (VNx32HI "VNx32HI")
+  (VNx64HI "VNx64HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
+  (VNx16SI "VNx16HI") (VNx32SI "VNx32HI") (VNx2DI "VNx2HI") (VNx4DI "VNx4HI")
+  (VNx8DI "VNx8HI") (VNx16DI "VNx16HI") 
+  (VNx2SF "VNx2HI") (VNx4SF "VNx4HI") (VNx8SF "VNx8HI") (VNx16SF "VNx16HI")
+  (VNx32SF "VNx32HI") (VNx2DF "VNx2HI") (VNx4DF "VNx4HI") (VNx8DF "VNx8HI")
+  (VNx16DF "VNx16HI")])
+
+(define_mode_attr vmap [
+  (VNx2QI "vnx2qi") (VNx4QI "vnx4qi") (VNx8QI "vnx8qi") (VNx16QI "vnx16qi")
+  (VNx32QI "vnx32qi") (VNx64QI "vnx64qi") (VNx128QI "vnx128qi") (VNx2HI "vnx2hi")
+  (VNx4HI "vnx4hi") (VNx8HI "vnx8hi") (VNx16HI "vnx16hi") (VNx32HI "vnx32hi")
+  (VNx64HI "vnx64hi") (VNx2SI "vnx2si") (VNx4SI "vnx4si") (VNx8SI "vnx8si")
+  (VNx16SI "vnx16si") (VNx32SI "vnx32si") (VNx2DI "vnx2di") (VNx4DI "vnx4di")
+  (VNx8DI "vnx8di") (VNx16DI "vnx16di")
+  (VNx2SF "vnx2si") (VNx4SF "vnx4si") (VNx8SF "vnx8si") (VNx16SF "vnx16si")
+  (VNx32SF "vnx32si") (VNx2DF "vnx2di") (VNx4DF "vnx4di") (VNx8DF "vnx8di")
+  (VNx16DF "vnx16di")])
+
+;; Map widen same size mode.
+(define_mode_attr VWMAP [
+  (VNx2SF "VNx2DI") (VNx4SF "VNx4DI") (VNx8SF "VNx8DI")
+  (VNx16SF "VNx16DI")])
+
+(define_mode_attr vwmap [
+  (VNx2SF "vnx2di") (VNx4SF "vnx4di") (VNx8SF "vnx8di")
+  (VNx16SF "vnx16di")])
+
+;; Map a vector int mode to vector widening float mode.
+(define_mode_attr VWFMAP [
+  (VNx2HI "VNx2SF") (VNx4HI "VNx4SF")
+  (VNx8HI "VNx8SF") (VNx16HI "VNx16SF") (VNx32HI "VNx32SF") (VNx2SI "VNx2DF")
+  (VNx4SI "VNx4DF") (VNx8SI "VNx8DF") (VNx16SI "VNx16DF")])
+
+(define_mode_attr vwfmap [
+  (VNx2QI "vnx2hf") (VNx4QI "vnx4hf") (VNx8QI "vnx8hf") (VNx16QI "vnx16hf")
+  (VNx32QI "vnx32hf") (VNx64QI "vnx64hf") (VNx2HI "vnx2sf") (VNx4HI "vnx4sf")
+  (VNx8HI "vnx8sf") (VNx16HI "vnx16sf") (VNx32HI "vnx32sf") (VNx2SI "vnx2df")
+  (VNx4SI "vnx4df") (VNx8SI "vnx8df") (VNx16SI "vnx16df")])
+
+;; Map a vector mode to its LMUL==1 equivalent.
+;; This is for reductions which use scalars in vector registers.
+(define_mode_attr VLMUL1 [
+  (VNx2QI "VNx16QI") (VNx4QI "VNx16QI") (VNx8QI "VNx16QI") (VNx16QI "VNx16QI")
+  (VNx32QI "VNx16QI") (VNx64QI "VNx16QI") (VNx128QI "VNx16QI") (VNx2HI "VNx8HI")
+  (VNx4HI "VNx8HI") (VNx8HI "VNx8HI") (VNx16HI "VNx8HI") (VNx32HI "VNx8HI")
+  (VNx64HI "VNx8HI") (VNx2SI "VNx4SI") (VNx4SI "VNx4SI") (VNx8SI "VNx4SI")
+  (VNx16SI "VNx4SI") (VNx32SI "VNx4SI") (VNx2DI "VNx2DI") (VNx4DI "VNx2DI")
+  (VNx8DI "VNx2DI") (VNx16DI "VNx2DI") 
+  (VNx2SF "VNx4SF") (VNx4SF "VNx4SF") (VNx8SF "VNx4SF") (VNx16SF "VNx4SF")
+  (VNx32SF "VNx4SF") (VNx2DF "VNx2DF") (VNx4DF "VNx2DF") (VNx8DF "VNx2DF")
+  (VNx16DF "VNx2DF")])
+
+;; Map a vector mode to its LMUL==1 widen vector type.
+;; This is for widening reductions which use scalars in vector registers.
+(define_mode_attr VWLMUL1 [
+  (VNx2QI "VNx8HI") (VNx4QI "VNx8HI") (VNx8QI "VNx8HI") (VNx16QI "VNx8HI")
+  (VNx32QI "VNx8HI") (VNx64QI "VNx8HI") (VNx128QI "VNx8HI") (VNx2HI "VNx4SI")
+  (VNx4HI "VNx4SI") (VNx8HI "VNx4SI") (VNx16HI "VNx4SI") (VNx32HI "VNx4SI")
+  (VNx64HI "VNx4SI") (VNx2SI "VNx2DI") (VNx4SI "VNx2DI") (VNx8SI "VNx2DI")
+  (VNx16SI "VNx2DI") (VNx32SI "VNx2DI")
+  (VNx2SF "VNx2DF") (VNx4SF "VNx2DF") (VNx8SF "VNx2DF") (VNx16SF "VNx2DF")
+  (VNx32SF "VNx2DF")])
   
+;; all indexed load/store.
+(define_int_iterator INDEXED_LOAD [UNSPEC_UNORDER_INDEXED_LOAD UNSPEC_ORDER_INDEXED_LOAD])
+(define_int_iterator INDEXED_STORE [UNSPEC_UNORDER_INDEXED_STORE UNSPEC_ORDER_INDEXED_STORE])
+
+;; integer multiply-add.
+(define_int_iterator IMAC [UNSPEC_MACC UNSPEC_NMSAC UNSPEC_MADD UNSPEC_NMSUB])
+
+;; Floating-point multiply-add.
+(define_int_iterator FMAC [UNSPEC_MACC UNSPEC_NMACC UNSPEC_MSAC UNSPEC_NMSAC
+      UNSPEC_MADD UNSPEC_NMADD UNSPEC_MSUB UNSPEC_NMSUB])
+
+;; Iterator for sign-injection instructions.
+(define_int_iterator COPYSIGNS [UNSPEC_COPYSIGN UNSPEC_NCOPYSIGN UNSPEC_XORSIGN])
+
+;; Iterator for all fixed-point instructions.
+(define_int_iterator SAT_OP [UNSPEC_AADDU UNSPEC_AADD
+				    UNSPEC_ASUBU UNSPEC_ASUB UNSPEC_SMUL])
+
+;; Iterator for vssrl and vssra instructions.
+(define_int_iterator SSHIFT [UNSPEC_SSRL UNSPEC_SSRA])
+
+;; Iterator for vnclip and vnclipu instructions.
+(define_int_iterator CLIP [UNSPEC_SIGNED_CLIP UNSPEC_UNSIGNED_CLIP])
+
+;; Iterator for reciprocal.
+(define_int_iterator RECIPROCAL [UNSPEC_RSQRT7 UNSPEC_REC7])
+
+;; Iterator for convert instructions.
+(define_int_iterator FCVT [UNSPEC_FLOAT_TO_SIGNED_INT UNSPEC_FLOAT_TO_UNSIGNED_INT])
+
+;; Iterator for integer reduction operations.
+(define_int_iterator REDUC [UNSPEC_REDUC_SUM
+          UNSPEC_REDUC_MAX
+          UNSPEC_REDUC_MAXU
+          UNSPEC_REDUC_MIN
+          UNSPEC_REDUC_MINU
+          UNSPEC_REDUC_AND
+          UNSPEC_REDUC_OR
+          UNSPEC_REDUC_XOR])
+
+;; Iterator for integer reduction min/max operations.
+(define_int_iterator REDUC_MAXMIN [UNSPEC_REDUC_MAX UNSPEC_REDUC_MAXU UNSPEC_REDUC_MIN UNSPEC_REDUC_MINU])
+
+;; Iterator for floating-point reduction instructions.
+(define_int_iterator FREDUC [UNSPEC_REDUC_UNORDERED_SUM UNSPEC_REDUC_ORDERED_SUM UNSPEC_REDUC_MAX UNSPEC_REDUC_MIN])
+
+;; Iterator for floating-point reduction auto-vectorization.
+(define_int_iterator FREDUCAUTO [UNSPEC_REDUC_SUM UNSPEC_REDUC_MAX UNSPEC_REDUC_MIN])
+
+;; Iterator for mask bits set instructions.
+(define_int_iterator MASK_SET [UNSPEC_SBF UNSPEC_SIF UNSPEC_SOF])
+
+;; Iterator for slide instructions.
+(define_int_iterator SLIDE [UNSPEC_SLIDEUP UNSPEC_SLIDEDOWN])
+(define_int_iterator SLIDE1 [UNSPEC_SLIDE1UP UNSPEC_SLIDE1DOWN])
+(define_int_iterator SLIDE_UP [UNSPEC_SLIDEUP])
+(define_int_iterator SLIDE_DOWN [UNSPEC_SLIDEDOWN])
+(define_int_iterator SLIDE1_UP [UNSPEC_SLIDE1UP])
+(define_int_iterator SLIDE1_DOWN [UNSPEC_SLIDE1DOWN])
+(define_int_iterator MUL_HIGHPART [UNSPEC_VMULH UNSPEC_VMULHU])
+
+;; Expands used to process SEW = 64 on TARGET_32BIT.
+
+(define_int_iterator VXOP [
+  UNSPEC_VADD UNSPEC_VSUB
+  UNSPEC_VAND UNSPEC_VIOX UNSPEC_VXOR
+  UNSPEC_VMIN UNSPEC_VMINU UNSPEC_VMAX UNSPEC_VMAXU
+  UNSPEC_VMUL UNSPEC_VMULH UNSPEC_VMULHU UNSPEC_VMULHSU
+  UNSPEC_VDIV UNSPEC_VDIVU UNSPEC_VREM UNSPEC_VREMU
+  UNSPEC_VSADD UNSPEC_VSADDU UNSPEC_VSSUB UNSPEC_VSSUBU
+  UNSPEC_VAADD UNSPEC_VAADDU UNSPEC_VASUB UNSPEC_VASUBU
+  UNSPEC_VSMUL
+])
+
+(define_int_iterator VXMOP [
+  UNSPEC_VADC UNSPEC_VSBC
+])
+
+(define_int_iterator VXMOP_NO_POLICY [
+  UNSPEC_VMADC UNSPEC_VMSBC
+])
+
+
+(define_int_iterator MVXOP [
+  UNSPEC_VMADC UNSPEC_VMSBC
+])
+
+;; Mask-producing integer compare operations (vms*).
+(define_int_iterator MVXMOP [
+  UNSPEC_VMSEQ UNSPEC_VMSNE UNSPEC_VMSLE UNSPEC_VMSLEU UNSPEC_VMSGT UNSPEC_VMSGTU
+  UNSPEC_VMSLT UNSPEC_VMSLTU UNSPEC_VMSGE UNSPEC_VMSGEU
+])
+
+;; Integer multiply-add operations.
+(define_int_iterator MACOP [
+  UNSPEC_MACC UNSPEC_NMSAC UNSPEC_MADD UNSPEC_NMSUB
+])
+
+(define_int_iterator VMERGEOP [
+  UNSPEC_VMERGE
+])
+
 (define_int_iterator VMVOP [
   UNSPEC_VMV
 ])
 
+(define_int_iterator VMVSOP [
+  UNSPEC_VMVS
+])
+
+(define_int_iterator VXROP [
+  UNSPEC_VRSUB
+])
+
+(define_int_iterator VSLIDE1 [
+  UNSPEC_SLIDE1UP UNSPEC_SLIDE1DOWN
+])
+
+;; map insn string to order type
+(define_int_attr uo
+ [(UNSPEC_UNORDER_INDEXED_LOAD "u") (UNSPEC_ORDER_INDEXED_LOAD "o")
+  (UNSPEC_UNORDER_INDEXED_STORE "u") (UNSPEC_ORDER_INDEXED_STORE "o")])
+
+(define_int_attr sat_op [(UNSPEC_AADDU "aaddu") (UNSPEC_AADD "aadd")
+			 (UNSPEC_ASUBU "asubu") (UNSPEC_ASUB "asub")
+			 (UNSPEC_SMUL "smul")])
+
+;; <reduc> expands to the name of the reduction operation that
+;; implements a particular unspec.
+(define_int_attr reduc [(UNSPEC_REDUC_SUM "sum") (UNSPEC_REDUC_UNORDERED_SUM "usum") (UNSPEC_REDUC_ORDERED_SUM "osum")
+          (UNSPEC_REDUC_MAX "max") (UNSPEC_REDUC_MAXU "maxu")
+          (UNSPEC_REDUC_MIN "min") (UNSPEC_REDUC_MINU "minu")
+          (UNSPEC_REDUC_AND "and") (UNSPEC_REDUC_OR "or") (UNSPEC_REDUC_XOR "xor")])
+
+;; Attribute for vssrl and vssra instructions.
+(define_int_attr sshift [(UNSPEC_SSRL "ssrl") (UNSPEC_SSRA "ssra")])
+
+;; Attribute for vnclip and vnclipu instructions.
+(define_int_attr clip [(UNSPEC_SIGNED_CLIP "clip") (UNSPEC_UNSIGNED_CLIP "clipu")])
+
+;; Attribute for vfrsqrt7 and vfrec7 instructions.
+(define_int_attr reciprocal [(UNSPEC_RSQRT7 "rsqrt7") (UNSPEC_REC7 "rec7")])
+
+;; Attributes for sign-injection instructions.
+(define_int_attr nx [(UNSPEC_COPYSIGN "") (UNSPEC_NCOPYSIGN "n") (UNSPEC_XORSIGN "x")])
+
+;; Attributes for convert instructions.
+(define_int_attr fu [(UNSPEC_FLOAT_TO_SIGNED_INT "") (UNSPEC_FLOAT_TO_UNSIGNED_INT "u")])
+
+;; Attributes for mask set bit.
+(define_int_attr smb [(UNSPEC_SBF "sbf") (UNSPEC_SIF "sif") (UNSPEC_SOF "sof")])
+
+;; Attributes for slide instructions.
+(define_int_attr ud [(UNSPEC_SLIDEUP "up") (UNSPEC_SLIDEDOWN "down")
+                     (UNSPEC_SLIDE1UP "up") (UNSPEC_SLIDE1DOWN "down")])
+
+;; Attributes for saturation operations.
+(define_int_attr vsat [(UNSPEC_AADDU "vsarith") (UNSPEC_AADD "vsarith")
+				    (UNSPEC_ASUBU "vsarith")  (UNSPEC_ASUB "vsarith")  (UNSPEC_SMUL "vsmul") ])
+
+;; Attributes for integer multiply-add.
+(define_int_attr imac [(UNSPEC_MACC "macc") (UNSPEC_NMSAC "nmsac") (UNSPEC_MADD "madd") (UNSPEC_NMSUB "nmsub")])
+
+;; Attributes for Floating-point multiply-add.
+(define_int_attr fmac [(UNSPEC_MACC "macc") (UNSPEC_NMACC "nmacc") (UNSPEC_MSAC "msac") (UNSPEC_NMSAC "nmsac")
+      (UNSPEC_MADD "madd") (UNSPEC_NMADD "nmadd") (UNSPEC_MSUB "msub") (UNSPEC_NMSUB "nmsub")])
+
+;; Attributes for signed and unsigned.
+(define_int_attr su
+ [(UNSPEC_VMULH "s") (UNSPEC_VMULHU "u")])
+
+;; Attributes for signed and unsigned.
+(define_int_attr u
+ [(UNSPEC_VMULH "") (UNSPEC_VMULHU "u")])
+
+;; optab for unspec iterator
+(define_int_attr optab [(UNSPEC_REDUC_SUM "plus")
+          (UNSPEC_REDUC_MAX "smax") (UNSPEC_REDUC_MAXU "umax")
+          (UNSPEC_REDUC_MIN "smin") (UNSPEC_REDUC_MINU "umin")
+          (UNSPEC_REDUC_AND "and") (UNSPEC_REDUC_OR "ior") (UNSPEC_REDUC_XOR "xor")])
+
+;; add and sub.
+(define_code_iterator plus_minus [plus minus])
+
+;; add, sub and mult.
+(define_code_iterator plus_minus_mult [plus minus mult])
+
+;; All operations valid for min and max.
+(define_code_iterator any_minmax [smin umin smax umax])
+
+;; Saturating add.
+(define_code_iterator any_satplus [ss_plus us_plus])
+
+;; sub and div.
+(define_code_iterator minus_div [minus div])
+
+;; All operations valid for floating-point.
+(define_code_iterator any_fop [plus mult smax smin minus div])
+
+;; All operations valid for floating-point/integer conversion.
+(define_code_iterator any_fix [fix unsigned_fix])
+(define_code_iterator any_float [float unsigned_float])
+
+;; All operations valid for the <op>not mask-register logical instructions.
+(define_code_iterator any_logicalnot [and ior])
+
+;; EQ, NE, LE, LEU.
+(define_code_iterator eq_ne_le_leu [eq ne le leu])
+
+;; GT, GTU
+(define_code_iterator gt_gtu [gt gtu])
+
+;; EQ, NE, LE, LEU, GT, GTU.
+(define_code_iterator eq_ne_le_leu_gt_gtu [eq ne le leu gt gtu])
+
+;; LT, LTU.
+(define_code_iterator lt_ltu [lt ltu])
+
+;; GE, GEU.
+(define_code_iterator ge_geu [ge geu])
+
+;; All operations valid for floating-point comparison.
+(define_code_iterator any_fcmp [eq ne lt le gt ge])
+
+;; All operations valid for non-trapping floating-point comparisons.
+(define_code_iterator any_fcmp_no_trapping [unordered ordered unlt unle unge ungt uneq ltgt])
+
+;; All integer comparisons except LT and GE.
+(define_code_iterator cmp_noltge [eq ne le gt leu gtu])
+
+;; All integer LT, LTU.
+(define_code_iterator cmp_lt [lt ltu])
+
+;; All integer GE.
+(define_code_iterator cmp_ge [ge geu])
+
+;; All integer LT,GE.
+(define_code_iterator cmp_ltge [lt ltu ge geu])
+
+;; RVV integer unary operations.
+(define_code_iterator int_unary [neg not])
+
+;; RVV floating-point unary operations.
+(define_code_iterator fp_unary [neg abs sqrt])
+
+;; RVV integer binary operations.
+(define_code_iterator int_binary [and ior xor smin umin smax umax mult div udiv mod umod])
+
+;; RVV integer binary vector-scalar operations.
+(define_code_iterator int_binary_vs [plus minus mult and ior xor smin umin smax umax])
+
+(define_code_iterator int_binary_vs_simm5 [plus and ior xor])
+
+(define_code_iterator int_binary_vs_reg [mult smin umin smax umax])
+
+;; RVV floating-point binary operations.
+(define_code_iterator fp_binary [plus mult smax smin])
+
+;; RVV floating-point binary vector-scalar operations.
+(define_code_iterator fp_binary_vs [plus minus mult smax smin])
+
+;; comparison code.
+(define_code_iterator cmp_all [eq ne le gt leu gtu lt ltu ge geu])
+
+;; <sz> expands to the name of the wcvt or wcvtu that implements a
+;; particular code.
+(define_code_attr sz [(sign_extend "s") (zero_extend "z")])
+
+;; map code to type.
+(define_code_attr rvv_type [(plus "varith") (minus "varith") 
+    (and "vlogical") (ior "vlogical") (xor "vlogical") (mult "vmul")
+    (smax "varith") (smin "varith") (umax "varith") (umin "varith") 
+    (div "vdiv") (udiv "vdiv") (mod "vdiv") (umod "vdiv")])
+
+;; map code to reverse operand.
+(define_code_attr rinsn [(plus "add") (minus "rsub") (mult "mul") 
+        (and "and") (ior "or") (xor "xor") 
+        (smin "min") (umin "minu") (smax "max") (umax "maxu")])
+
+;; Map a code to its negated mask-register logical instruction (nand/nor/xnor).
+(define_code_attr ninsn [(and "nand") (ior "nor") (xor "xnor")])
+
+;; map comparison code to the constraint.
+(define_code_attr cmp_imm_p_tab [
+  (eq "Ws5") (ne "Ws5") (le "Ws5") (gt "Ws5") (leu "Ws5") (gtu "Ws5")
+  (lt "Wn5") (ltu "Wn5") (ge "Wn5") (geu "Wn5")
+])
+
 (define_int_attr vxoptab [
+  (UNSPEC_VADD "add") (UNSPEC_VSUB "sub") (UNSPEC_VRSUB "rsub")
+  (UNSPEC_VAND "and") (UNSPEC_VIOX "ior") (UNSPEC_VXOR "xor")
+  (UNSPEC_VMIN "smin") (UNSPEC_VMINU "umin") (UNSPEC_VMAX "smax") (UNSPEC_VMAXU "umax")
+  (UNSPEC_VMUL "mul") (UNSPEC_VMULH "mulh") (UNSPEC_VMULHU "mulhu") (UNSPEC_VMULHSU "mulhsu")
+  (UNSPEC_VDIV "div") (UNSPEC_VDIVU "udiv") (UNSPEC_VREM "mod") (UNSPEC_VREMU "umod")
+  (UNSPEC_VSADD "ssadd") (UNSPEC_VSADDU "usadd") (UNSPEC_VSSUB "sssub") (UNSPEC_VSSUBU "ussub")
+  (UNSPEC_VAADD "aadd") (UNSPEC_VAADDU "aaddu") (UNSPEC_VASUB "asub") (UNSPEC_VASUBU "asubu")
+  (UNSPEC_VSMUL "smul")
+  (UNSPEC_VADC "adc") (UNSPEC_VSBC "sbc")
+  (UNSPEC_VMADC "madc") (UNSPEC_VMSBC "msbc")
+  (UNSPEC_MACC "macc") (UNSPEC_NMSAC "nmsac") (UNSPEC_MADD "madd") (UNSPEC_NMSUB "nmsub")
+  (UNSPEC_VMERGE "merge")
   (UNSPEC_VMV "mv")
+  (UNSPEC_VMVS "mv")
+  (UNSPEC_SLIDE1UP "up") (UNSPEC_SLIDE1DOWN "down")
 ])
 
 (define_int_attr VXOPTAB [
+  (UNSPEC_VADD "UNSPEC_VADD") (UNSPEC_VSUB "UNSPEC_VSUB") (UNSPEC_VRSUB "UNSPEC_VRSUB")
+  (UNSPEC_VAND "UNSPEC_VAND") (UNSPEC_VIOX "UNSPEC_VIOX") (UNSPEC_VXOR "UNSPEC_VXOR")
+  (UNSPEC_VMIN "UNSPEC_VMIN") (UNSPEC_VMINU "UNSPEC_VMINU") (UNSPEC_VMAX "UNSPEC_VMAX") (UNSPEC_VMAXU "UNSPEC_VMAXU")
+  (UNSPEC_VMUL "UNSPEC_VMUL") (UNSPEC_VMULH "UNSPEC_VMULH") (UNSPEC_VMULHU "UNSPEC_VMULHU") (UNSPEC_VMULHSU "UNSPEC_VMULHSU")
+  (UNSPEC_VDIV "UNSPEC_VDIV") (UNSPEC_VDIVU "UNSPEC_VDIVU") (UNSPEC_VREM "UNSPEC_VREM") (UNSPEC_VREMU "UNSPEC_VREMU")
+  (UNSPEC_VSADD "UNSPEC_VSADD") (UNSPEC_VSADDU "UNSPEC_VSADDU") (UNSPEC_VSSUB "UNSPEC_VSSUB") (UNSPEC_VSSUBU "UNSPEC_VSSUBU")
+  (UNSPEC_VAADD "UNSPEC_VAADD") (UNSPEC_VAADDU "UNSPEC_VAADDU") (UNSPEC_VASUB "UNSPEC_VASUB") (UNSPEC_VASUBU "UNSPEC_VASUBU")
+  (UNSPEC_VSMUL "UNSPEC_VSMUL")
+  (UNSPEC_VADC "UNSPEC_VADC") (UNSPEC_VSBC "UNSPEC_VSBC")
+  (UNSPEC_VMADC "UNSPEC_VMADC") (UNSPEC_VMSBC "UNSPEC_VMSBC")
+  (UNSPEC_MACC "UNSPEC_MACC") (UNSPEC_NMSAC "UNSPEC_NMSAC") (UNSPEC_MADD "UNSPEC_MADD") (UNSPEC_NMSUB "UNSPEC_NMSUB")
+  (UNSPEC_VMERGE "UNSPEC_VMERGE")
   (UNSPEC_VMV "UNSPEC_VMV")
+  (UNSPEC_VMVS "UNSPEC_VMVS")
+  (UNSPEC_SLIDE1UP "UNSPEC_SLIDE1UP") (UNSPEC_SLIDE1DOWN "UNSPEC_SLIDE1DOWN")
 ])
 
 (define_int_attr immptab [
+  (UNSPEC_VADD "Ws5") (UNSPEC_VSUB "Wn5") (UNSPEC_VRSUB "Ws5")
+  (UNSPEC_VAND "Ws5") (UNSPEC_VIOX "Ws5") (UNSPEC_VXOR "Ws5")
+  (UNSPEC_VMIN "J") (UNSPEC_VMINU "J") (UNSPEC_VMAX "J") (UNSPEC_VMAXU "J")
+  (UNSPEC_VMUL "J") (UNSPEC_VMULH "J") (UNSPEC_VMULHU "J") (UNSPEC_VMULHSU "J")
+  (UNSPEC_VDIV "J") (UNSPEC_VDIVU "J") (UNSPEC_VREM "J") (UNSPEC_VREMU "J")
+  (UNSPEC_VSADD "Ws5") (UNSPEC_VSADDU "Ws5") (UNSPEC_VSSUB "Wn5") (UNSPEC_VSSUBU "Wn5")
+  (UNSPEC_VAADD "J") (UNSPEC_VAADDU "J") (UNSPEC_VASUB "J") (UNSPEC_VASUBU "J")
+  (UNSPEC_VSMUL "J")
+  (UNSPEC_VADC "Ws5")
+  (UNSPEC_VADC "Ws5") (UNSPEC_VSBC "J")
+  (UNSPEC_VMADC "Ws5") (UNSPEC_VMSBC "J")
+  (UNSPEC_MACC "J") (UNSPEC_NMSAC "J") (UNSPEC_MADD "J") (UNSPEC_NMSUB "J")
+  (UNSPEC_VMERGE "Ws5")
   (UNSPEC_VMV "Ws5")
+  (UNSPEC_VMVS "J")
+  (UNSPEC_SLIDE1UP "J") (UNSPEC_SLIDE1DOWN "J")
 ])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 54e68aa165b..fc7ec77dfc4 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -240,6 +240,23 @@
 ;; == Intrinsics
 ;; ===============================================================================
 
+;; -------------------------------------------------------------------------------
+;; ---- CSR Instructions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - csr read vl instructions
+;; -------------------------------------------------------------------------------
+
+;; vl read instruction
+(define_insn "@readvl_<X:mode>"
+  [(set (match_operand:X 0 "register_operand" "=r")
+    (unspec:X
+      [(match_operand 1 "vector_any_register_operand" "vr")] UNSPEC_READVL))]
+  "TARGET_VECTOR"
+  "csrr\t%0,vl"
+  [(set_attr "type" "csr")
+   (set_attr "mode" "<X:MODE>")])
+
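
As an aside (not part of the patch): the expected consumer of this pattern is the fault-only-first loads, whose intrinsics hand back the vl actually established by the hardware.  A hedged C sketch, assuming the RVV intrinsic naming of this period (a vle32ff_v_i32m1 call with a new_vl out-parameter):

#include <stdint.h>
#include <stddef.h>
#include <riscv_vector.h>

/* Load up to VL int32 elements, stopping early at a faulting element.
   Reading back new_vl is what the readvl pattern above implements as
   "csrr rd,vl" after the vle32ff.v instruction.  */
size_t
load_prefix (const int32_t *base, size_t vl, vint32m1_t *out)
{
  size_t new_vl;
  *out = vle32ff_v_i32m1 (base, &new_vl, vl);
  return new_vl;
}
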
 ;; -------------------------------------------------------------------------------
 ;; ---- 6. Configuration-Setting Instructions
 ;; -------------------------------------------------------------------------------
@@ -490,10 +507,6 @@
 ;; - 7.5. Vector Strided Instructions
 ;; - 7.6. Vector Indexed Instructions
 ;; - 7.7. Unit-stride Fault-Only-First Instructions
-;; - 7.8. Vector Load/Store Segment Instructions
-;;  -  7.8.1. Vector Unit-Stride Segment Loads and Stores
-;;  -  7.8.2. Vector Strided Segment Loads and Stores
-;;  -  7.8.3. Vector Indexed Segment Loads and Stores
 ;; -------------------------------------------------------------------------------
 
 ;; Vector Unit-Stride Loads.
@@ -574,6 +587,393 @@
   [(set_attr "type" "vse")
    (set_attr "mode" "<MODE>")])
 
+;; Vector Strided Loads.
+
+;; This pattern is special: it carries an explicit policy operand
+;; because the expansion needs it.
+(define_insn "@vlse<mode>"
+  [(set (match_operand:V 0 "register_operand"                 "=vd,vd,vd,vd,vr,vr,vr,vr")
+    (unspec:V
+      [(unspec:V
+        [(match_operand:<VM> 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+         (unspec:V
+           [(match_operand 3 "pmode_register_operand"         "r,r,r,r,r,r,r,r")
+           (match_operand 4 "p_reg_or_0_operand"              "r,J,r,J,r,J,r,J")
+           (mem:BLK (scratch))] UNSPEC_STRIDED_LOAD)
+         (match_operand:V 2 "vector_reg_or_const0_operand"    "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+      (match_operand 5 "p_reg_or_const_csr_operand"           "rK,rK,rK,rK,rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vlse<sew>.v\t%0,(%3),%4,%1.t
+   vlse<sew>.v\t%0,(%3),zero,%1.t
+   vlse<sew>.v\t%0,(%3),%4,%1.t
+   vlse<sew>.v\t%0,(%3),zero,%1.t
+   vlse<sew>.v\t%0,(%3),%4
+   vlse<sew>.v\t%0,(%3),zero
+   vlse<sew>.v\t%0,(%3),%4
+   vlse<sew>.v\t%0,(%3),zero"
+  [(set_attr "type" "vlse")
+   (set_attr "mode" "<MODE>")])
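
To make the alternatives above concrete, an illustrative use (not part of the patch) through what is assumed to be the vlse32 strided-load intrinsic of this period; the masked rows and the "zero"-stride rows of the constraint string correspond to the other call variants:

#include <stdint.h>
#include <stddef.h>
#include <riscv_vector.h>

/* Gather one column of a row-major int32 matrix: the byte stride
   between consecutive elements is the row size in bytes, so this is
   expected to map onto "vlse32.v vd,(a0),a1" via the pattern above.  */
vint32m1_t
load_column (const int32_t *col0, ptrdiff_t row_bytes, size_t vl)
{
  return vlse32_v_i32m1 (col0, row_bytes, vl);
}
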
+
+;; Vector Strided Stores.
+(define_insn "@vsse<mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V
+        [(match_operand:<VM> 0 "vector_reg_or_const0_operand" "vm,vm,J,J")
+         (unspec:BLK
+          [(match_operand 1 "pmode_register_operand"          "r,r,r,r")
+           (match_operand 2 "p_reg_or_0_operand"              "r,J,r,J")
+           (match_operand:V 3 "register_operand"              "vr,vr,vr,vr")] UNSPEC_STRIDED_STORE)
+         (match_dup 1)] UNSPEC_SELECT)
+      (match_operand 4 "p_reg_or_const_csr_operand"           "rK,rK,rK,rK")
+      (match_operand 5 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vsse<sew>.v\t%3,(%1),%2,%0.t
+   vsse<sew>.v\t%3,(%1),zero,%0.t
+   vsse<sew>.v\t%3,(%1),%2
+   vsse<sew>.v\t%3,(%1),zero"
+  [(set_attr "type" "vsse")
+   (set_attr "mode" "<MODE>")])
+
+;; Vector Unordered and Ordered Indexed Loads.
+;; The following patterns are the ones matched after reload.  We
+;; split them by nunits (2, 4, 8, 16, 32, 64 and 128) to reduce the
+;; number of CODE_FOR_xxxxx patterns and thus compilation time.
+
+;; pattern of indexed loads for nunits = 2.
+(define_insn "@vl<uo>xei<V2UNITS:mode><V2UNITSI:mode>"
+  [(set (match_operand:V2UNITS 0 "register_operand"                     "=&vr,&vr,&vr,&vr")
+    (unspec:V2UNITS
+      [(unspec:V2UNITS
+        [(match_operand:<V2UNITS:VM> 1 "vector_reg_or_const0_operand"   "vm,vm,J,J")
+         (match_operand:V2UNITS 2 "vector_reg_or_const0_operand"        "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                      "r,r,r,r")
+         (match_operand:V2UNITSI 4 "register_operand"                   "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                     "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V2UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V2UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V2UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V2UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V2UNITS:MODE>")])
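
As a non-normative illustration of the access these per-nunits patterns provide, an unordered indexed (gather) load through what is assumed to be the vluxei32 intrinsic of this period; the ordered vloxei form differs only in the <uo> iterator value:

#include <stdint.h>
#include <stddef.h>
#include <riscv_vector.h>

/* Gather 32-bit elements located at the given byte offsets from BASE;
   this is expected to map onto "vluxei32.v vd,(a0),v?" via the
   nunits-specific patterns above.  */
vint32m1_t
gather (const int32_t *base, vuint32m1_t byte_offsets, size_t vl)
{
  return vluxei32_v_i32m1 (base, byte_offsets, vl);
}
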
+
+;; pattern of indexed loads for nunits = 4.
+(define_insn "@vl<uo>xei<V4UNITS:mode><V4UNITSI:mode>"
+  [(set (match_operand:V4UNITS 0 "register_operand"                   "=&vr,&vr,&vr,&vr")
+    (unspec:V4UNITS
+      [(unspec:V4UNITS
+        [(match_operand:<V4UNITS:VM> 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+         (match_operand:V4UNITS 2 "vector_reg_or_const0_operand"      "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                    "r,r,r,r")
+         (match_operand:V4UNITSI 4 "register_operand"                 "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                   "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V4UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V4UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V4UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V4UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V4UNITS:MODE>")])
+
+;; pattern of indexed loads for nunits = 8.
+(define_insn "@vl<uo>xei<V8UNITS:mode><V8UNITSI:mode>"
+  [(set (match_operand:V8UNITS 0 "register_operand"                   "=&vr,&vr,&vr,&vr")
+    (unspec:V8UNITS
+      [(unspec:V8UNITS
+        [(match_operand:<V8UNITS:VM> 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+         (match_operand:V8UNITS 2 "vector_reg_or_const0_operand"      "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                    "r,r,r,r")
+         (match_operand:V8UNITSI 4 "register_operand"                 "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                   "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V8UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V8UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V8UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V8UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V8UNITS:MODE>")])
+
+;; pattern of indexed loads for nunits = 16.
+(define_insn "@vl<uo>xei<V16UNITS:mode><V16UNITSI:mode>"
+  [(set (match_operand:V16UNITS 0 "register_operand"                    "=&vr,&vr,&vr,&vr")
+    (unspec:V16UNITS
+      [(unspec:V16UNITS
+        [(match_operand:<V16UNITS:VM> 1 "vector_reg_or_const0_operand"  "vm,vm,J,J")
+         (match_operand:V16UNITS 2 "vector_reg_or_const0_operand"       "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                      "r,r,r,r")
+         (match_operand:V16UNITSI 4 "register_operand"                  "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                     "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V16UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V16UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V16UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V16UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V16UNITS:MODE>")])
+
+;; pattern of indexed loads for nunits = 32.
+(define_insn "@vl<uo>xei<V32UNITS:mode><V32UNITSI:mode>"
+  [(set (match_operand:V32UNITS 0 "register_operand"                    "=&vr,&vr,&vr,&vr")
+    (unspec:V32UNITS
+      [(unspec:V32UNITS
+        [(match_operand:<V32UNITS:VM> 1 "vector_reg_or_const0_operand"  "vm,vm,J,J")
+         (match_operand:V32UNITS 2 "vector_reg_or_const0_operand"       "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                      "r,r,r,r")
+         (match_operand:V32UNITSI 4 "register_operand"                  "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                     "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V32UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V32UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V32UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V32UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V32UNITS:MODE>")])
+
+;; Pattern of indexed loads for nunits = 64.
+(define_insn "@vl<uo>xei<V64UNITS:mode><V64UNITSI:mode>"
+  [(set (match_operand:V64UNITS 0 "register_operand"                    "=&vr,&vr,&vr,&vr")
+    (unspec:V64UNITS
+      [(unspec:V64UNITS
+        [(match_operand:<V64UNITS:VM> 1 "vector_reg_or_const0_operand"  "vm,vm,J,J")
+         (match_operand:V64UNITS 2 "vector_reg_or_const0_operand"       "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                      "r,r,r,r")
+         (match_operand:V64UNITSI 4 "register_operand"                  "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                     "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V64UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V64UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V64UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V64UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V64UNITS:MODE>")])
+
+;; Pattern of indexed loads for nunits = 128.
+(define_insn "@vl<uo>xei<V128UNITSI:mode><V128UNITSI:mode>"
+  [(set (match_operand:V128UNITSI 0 "register_operand"                    "=&vr,&vr,&vr,&vr")
+    (unspec:V128UNITSI
+      [(unspec:V128UNITSI
+        [(match_operand:<V128UNITSI:VM> 1 "vector_reg_or_const0_operand"  "vm,vm,J,J")
+         (match_operand:V128UNITSI 2 "vector_reg_or_const0_operand"       "0,J,0,J")
+         (match_operand 3 "pmode_register_operand"                        "r,r,r,r")
+         (match_operand:V128UNITSI 4 "register_operand"                   "vr,vr,vr,vr")
+         (mem:BLK (scratch))] INDEXED_LOAD)
+      (match_operand 5 "p_reg_or_const_csr_operand"                       "rK,rK,rK,rK")
+      (match_operand 6 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vl<uo>xei<V128UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V128UNITSI:sew>.v\t%0,(%3),%4,%1.t
+   vl<uo>xei<V128UNITSI:sew>.v\t%0,(%3),%4
+   vl<uo>xei<V128UNITSI:sew>.v\t%0,(%3),%4"
+  [(set_attr "type" "vl<uo>xei")
+   (set_attr "mode" "<V128UNITSI:MODE>")])
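+
+;; As an illustrative example (assuming the naming in the RVV intrinsic
+;; specification), a call such as
+;;   vint32m1_t v = vluxei32_v_i32m1 (base, index, vl);
+;; is expected to expand through one of the @vluxei patterns above, with
+;; the base pointer in operand 3 and the index vector in operand 4.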
+
+;; Vector Unordered and Ordered Indexed Stores.
+
+;; Pattern of indexed stores for nunits = 2.
+(define_insn "@vs<uo>xei<V2UNITS:mode><V2UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V2UNITS
+         [(match_operand:<V2UNITS:VM> 0 "vector_reg_or_const0_operand"  "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V2UNITSI 2 "register_operand"                  "vr,vr")
+          (match_operand:V2UNITS 3 "register_operand"                   "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V2UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V2UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V2UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 4.
+(define_insn "@vs<uo>xei<V4UNITS:mode><V4UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V4UNITS
+         [(match_operand:<V4UNITS:VM> 0 "vector_reg_or_const0_operand"  "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V4UNITSI 2 "register_operand"                  "vr,vr")
+          (match_operand:V4UNITS 3 "register_operand"                   "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V4UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V4UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V4UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 8.
+(define_insn "@vs<uo>xei<V8UNITS:mode><V8UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V8UNITS
+         [(match_operand:<V8UNITS:VM> 0 "vector_reg_or_const0_operand"  "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V8UNITSI 2 "register_operand"                  "vr,vr")
+          (match_operand:V8UNITS 3 "register_operand"                   "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V8UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V8UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V8UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 16.
+(define_insn "@vs<uo>xei<V16UNITS:mode><V16UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V16UNITS
+         [(match_operand:<V16UNITS:VM> 0 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V16UNITSI 2 "register_operand"                 "vr,vr")
+          (match_operand:V16UNITS 3 "register_operand"                  "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V16UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V16UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V16UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 32.
+(define_insn "@vs<uo>xei<V32UNITS:mode><V32UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V32UNITS
+         [(match_operand:<V32UNITS:VM> 0 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V32UNITSI 2 "register_operand"                 "vr,vr")
+          (match_operand:V32UNITS 3 "register_operand"                  "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V32UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V32UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V32UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 64.
+(define_insn "@vs<uo>xei<V64UNITS:mode><V64UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V64UNITS
+         [(match_operand:<V64UNITS:VM> 0 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand 1 "pmode_register_operand"                     "r,r")
+          (match_operand:V64UNITSI 2 "register_operand"                 "vr,vr")
+          (match_operand:V64UNITS 3 "register_operand"                  "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                       "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V64UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V64UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V64UNITS:MODE>")])
+
+;; Pattern of indexed stores for nunits = 128.
+(define_insn "@vs<uo>xei<V128UNITSI:mode><V128UNITSI:mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V128UNITSI
+         [(match_operand:<V128UNITSI:VM> 0 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand 1 "pmode_register_operand"                       "r,r")
+          (match_operand:V128UNITSI 2 "register_operand"                  "vr,vr")
+          (match_operand:V128UNITSI 3 "register_operand"                  "vr,vr")] INDEXED_STORE)
+    (match_operand 4 "p_reg_or_const_csr_operand"                         "rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vs<uo>xei<V128UNITSI:sew>.v\t%3,(%1),%2,%0.t
+   vs<uo>xei<V128UNITSI:sew>.v\t%3,(%1),%2"
+  [(set_attr "type" "vs<uo>xei")
+   (set_attr "mode" "<V128UNITSI:MODE>")])
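+
+;; Illustrative example (again assuming the RVV intrinsic naming): a call
+;; such as
+;;   vsoxei32_v_i32m1 (base, index, value, vl);
+;; would map onto one of the @vsoxei patterns above, with the base pointer
+;; in operand 1, the index vector in operand 2 and the stored data in
+;; operand 3.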
+
+;; Unit-stride Fault-Only-First Loads.
+(define_insn "@vle<mode>ff"
+  [(set (match_operand:V 0 "register_operand"               "=vd,vd,vr,vr")
+   (unspec:V
+    [(unspec:V
+      [(match_operand:<VM> 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+       (unspec:V
+         [(match_operand 3 "pmode_register_operand"         "r,r,r,r")
+           (mem:BLK (scratch))] UNSPEC_FAULT_ONLY_FIRST_LOAD)
+       (match_operand:V 2 "vector_reg_or_const0_operand"    "0,J,0,J")] UNSPEC_SELECT)
+    (match_operand 4 "p_reg_or_const_csr_operand"           "rK,rK,rK,rK")
+    (match_operand 5 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))
+  (clobber (reg:SI VL_REGNUM))]
+  "TARGET_VECTOR"
+  "@
+   vle<sew>ff.v\t%0,(%3),%1.t
+   vle<sew>ff.v\t%0,(%3),%1.t
+   vle<sew>ff.v\t%0,(%3)
+   vle<sew>ff.v\t%0,(%3)"
+  [(set_attr "type" "vleff")
+   (set_attr "mode" "<MODE>")])
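+
+;; vle<sew>ff.v may truncate vl when an element after the first one faults,
+;; which is why the pattern above clobbers VL_REGNUM.  An illustrative use
+;; (assuming the RVV intrinsic naming) is
+;;   vint32m1_t v = vle32ff_v_i32m1 (base, &new_vl, vl);
+;; where the updated vector length is read back, presumably through the
+;; readvl support added elsewhere in this patch.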
+
 ;; vmv.v.x
 (define_expand "@v<vxoptab><mode>_v_x"
   [(unspec [
-- 
2.36.1