public inbox for gcc-patches@gcc.gnu.org
* [PATCH] RISC-V: Add tuple type vget/vset intrinsics
@ 2023-04-19 13:00 juzhe.zhong
  2023-05-03 10:41 ` Kito Cheng
  0 siblings, 1 reply; 2+ messages in thread
From: juzhe.zhong @ 2023-04-19 13:00 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Juzhe-Zhong

From: Juzhe-Zhong <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/genrvv-type-indexer.cc (valid_type): Adapt for tuple type support.
        (inttype): Ditto.
        (floattype): Ditto.
        (main): Ditto.
        * config/riscv/riscv-vector-builtins-bases.cc: Ditto.
        * config/riscv/riscv-vector-builtins-functions.def (vset): Add tuple type vset.
        (vget): Add tuple type vget.
        * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_TUPLE_OPS): New macro.
        (vint8mf8x2_t): Ditto.
        (vuint8mf8x2_t): Ditto.
        (vint8mf8x3_t): Ditto.
        (vuint8mf8x3_t): Ditto.
        (vint8mf8x4_t): Ditto.
        (vuint8mf8x4_t): Ditto.
        (vint8mf8x5_t): Ditto.
        (vuint8mf8x5_t): Ditto.
        (vint8mf8x6_t): Ditto.
        (vuint8mf8x6_t): Ditto.
        (vint8mf8x7_t): Ditto.
        (vuint8mf8x7_t): Ditto.
        (vint8mf8x8_t): Ditto.
        (vuint8mf8x8_t): Ditto.
        (vint8mf4x2_t): Ditto.
        (vuint8mf4x2_t): Ditto.
        (vint8mf4x3_t): Ditto.
        (vuint8mf4x3_t): Ditto.
        (vint8mf4x4_t): Ditto.
        (vuint8mf4x4_t): Ditto.
        (vint8mf4x5_t): Ditto.
        (vuint8mf4x5_t): Ditto.
        (vint8mf4x6_t): Ditto.
        (vuint8mf4x6_t): Ditto.
        (vint8mf4x7_t): Ditto.
        (vuint8mf4x7_t): Ditto.
        (vint8mf4x8_t): Ditto.
        (vuint8mf4x8_t): Ditto.
        (vint8mf2x2_t): Ditto.
        (vuint8mf2x2_t): Ditto.
        (vint8mf2x3_t): Ditto.
        (vuint8mf2x3_t): Ditto.
        (vint8mf2x4_t): Ditto.
        (vuint8mf2x4_t): Ditto.
        (vint8mf2x5_t): Ditto.
        (vuint8mf2x5_t): Ditto.
        (vint8mf2x6_t): Ditto.
        (vuint8mf2x6_t): Ditto.
        (vint8mf2x7_t): Ditto.
        (vuint8mf2x7_t): Ditto.
        (vint8mf2x8_t): Ditto.
        (vuint8mf2x8_t): Ditto.
        (vint8m1x2_t): Ditto.
        (vuint8m1x2_t): Ditto.
        (vint8m1x3_t): Ditto.
        (vuint8m1x3_t): Ditto.
        (vint8m1x4_t): Ditto.
        (vuint8m1x4_t): Ditto.
        (vint8m1x5_t): Ditto.
        (vuint8m1x5_t): Ditto.
        (vint8m1x6_t): Ditto.
        (vuint8m1x6_t): Ditto.
        (vint8m1x7_t): Ditto.
        (vuint8m1x7_t): Ditto.
        (vint8m1x8_t): Ditto.
        (vuint8m1x8_t): Ditto.
        (vint8m2x2_t): Ditto.
        (vuint8m2x2_t): Ditto.
        (vint8m2x3_t): Ditto.
        (vuint8m2x3_t): Ditto.
        (vint8m2x4_t): Ditto.
        (vuint8m2x4_t): Ditto.
        (vint8m4x2_t): Ditto.
        (vuint8m4x2_t): Ditto.
        (vint16mf4x2_t): Ditto.
        (vuint16mf4x2_t): Ditto.
        (vint16mf4x3_t): Ditto.
        (vuint16mf4x3_t): Ditto.
        (vint16mf4x4_t): Ditto.
        (vuint16mf4x4_t): Ditto.
        (vint16mf4x5_t): Ditto.
        (vuint16mf4x5_t): Ditto.
        (vint16mf4x6_t): Ditto.
        (vuint16mf4x6_t): Ditto.
        (vint16mf4x7_t): Ditto.
        (vuint16mf4x7_t): Ditto.
        (vint16mf4x8_t): Ditto.
        (vuint16mf4x8_t): Ditto.
        (vint16mf2x2_t): Ditto.
        (vuint16mf2x2_t): Ditto.
        (vint16mf2x3_t): Ditto.
        (vuint16mf2x3_t): Ditto.
        (vint16mf2x4_t): Ditto.
        (vuint16mf2x4_t): Ditto.
        (vint16mf2x5_t): Ditto.
        (vuint16mf2x5_t): Ditto.
        (vint16mf2x6_t): Ditto.
        (vuint16mf2x6_t): Ditto.
        (vint16mf2x7_t): Ditto.
        (vuint16mf2x7_t): Ditto.
        (vint16mf2x8_t): Ditto.
        (vuint16mf2x8_t): Ditto.
        (vint16m1x2_t): Ditto.
        (vuint16m1x2_t): Ditto.
        (vint16m1x3_t): Ditto.
        (vuint16m1x3_t): Ditto.
        (vint16m1x4_t): Ditto.
        (vuint16m1x4_t): Ditto.
        (vint16m1x5_t): Ditto.
        (vuint16m1x5_t): Ditto.
        (vint16m1x6_t): Ditto.
        (vuint16m1x6_t): Ditto.
        (vint16m1x7_t): Ditto.
        (vuint16m1x7_t): Ditto.
        (vint16m1x8_t): Ditto.
        (vuint16m1x8_t): Ditto.
        (vint16m2x2_t): Ditto.
        (vuint16m2x2_t): Ditto.
        (vint16m2x3_t): Ditto.
        (vuint16m2x3_t): Ditto.
        (vint16m2x4_t): Ditto.
        (vuint16m2x4_t): Ditto.
        (vint16m4x2_t): Ditto.
        (vuint16m4x2_t): Ditto.
        (vint32mf2x2_t): Ditto.
        (vuint32mf2x2_t): Ditto.
        (vint32mf2x3_t): Ditto.
        (vuint32mf2x3_t): Ditto.
        (vint32mf2x4_t): Ditto.
        (vuint32mf2x4_t): Ditto.
        (vint32mf2x5_t): Ditto.
        (vuint32mf2x5_t): Ditto.
        (vint32mf2x6_t): Ditto.
        (vuint32mf2x6_t): Ditto.
        (vint32mf2x7_t): Ditto.
        (vuint32mf2x7_t): Ditto.
        (vint32mf2x8_t): Ditto.
        (vuint32mf2x8_t): Ditto.
        (vint32m1x2_t): Ditto.
        (vuint32m1x2_t): Ditto.
        (vint32m1x3_t): Ditto.
        (vuint32m1x3_t): Ditto.
        (vint32m1x4_t): Ditto.
        (vuint32m1x4_t): Ditto.
        (vint32m1x5_t): Ditto.
        (vuint32m1x5_t): Ditto.
        (vint32m1x6_t): Ditto.
        (vuint32m1x6_t): Ditto.
        (vint32m1x7_t): Ditto.
        (vuint32m1x7_t): Ditto.
        (vint32m1x8_t): Ditto.
        (vuint32m1x8_t): Ditto.
        (vint32m2x2_t): Ditto.
        (vuint32m2x2_t): Ditto.
        (vint32m2x3_t): Ditto.
        (vuint32m2x3_t): Ditto.
        (vint32m2x4_t): Ditto.
        (vuint32m2x4_t): Ditto.
        (vint32m4x2_t): Ditto.
        (vuint32m4x2_t): Ditto.
        (vint64m1x2_t): Ditto.
        (vuint64m1x2_t): Ditto.
        (vint64m1x3_t): Ditto.
        (vuint64m1x3_t): Ditto.
        (vint64m1x4_t): Ditto.
        (vuint64m1x4_t): Ditto.
        (vint64m1x5_t): Ditto.
        (vuint64m1x5_t): Ditto.
        (vint64m1x6_t): Ditto.
        (vuint64m1x6_t): Ditto.
        (vint64m1x7_t): Ditto.
        (vuint64m1x7_t): Ditto.
        (vint64m1x8_t): Ditto.
        (vuint64m1x8_t): Ditto.
        (vint64m2x2_t): Ditto.
        (vuint64m2x2_t): Ditto.
        (vint64m2x3_t): Ditto.
        (vuint64m2x3_t): Ditto.
        (vint64m2x4_t): Ditto.
        (vuint64m2x4_t): Ditto.
        (vint64m4x2_t): Ditto.
        (vuint64m4x2_t): Ditto.
        (vfloat32mf2x2_t): Ditto.
        (vfloat32mf2x3_t): Ditto.
        (vfloat32mf2x4_t): Ditto.
        (vfloat32mf2x5_t): Ditto.
        (vfloat32mf2x6_t): Ditto.
        (vfloat32mf2x7_t): Ditto.
        (vfloat32mf2x8_t): Ditto.
        (vfloat32m1x2_t): Ditto.
        (vfloat32m1x3_t): Ditto.
        (vfloat32m1x4_t): Ditto.
        (vfloat32m1x5_t): Ditto.
        (vfloat32m1x6_t): Ditto.
        (vfloat32m1x7_t): Ditto.
        (vfloat32m1x8_t): Ditto.
        (vfloat32m2x2_t): Ditto.
        (vfloat32m2x3_t): Ditto.
        (vfloat32m2x4_t): Ditto.
        (vfloat32m4x2_t): Ditto.
        (vfloat64m1x2_t): Ditto.
        (vfloat64m1x3_t): Ditto.
        (vfloat64m1x4_t): Ditto.
        (vfloat64m1x5_t): Ditto.
        (vfloat64m1x6_t): Ditto.
        (vfloat64m1x7_t): Ditto.
        (vfloat64m1x8_t): Ditto.
        (vfloat64m2x2_t): Ditto.
        (vfloat64m2x3_t): Ditto.
        (vfloat64m2x4_t): Ditto.
        (vfloat64m4x2_t): Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TUPLE_OPS): Ditto.
        (DEF_RVV_TYPE_INDEX): Ditto.
        (rvv_arg_type_info::get_tuple_subpart_type): New function.
        (DEF_RVV_TUPLE_TYPE): New macro.
        * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE_INDEX): Adapt for tuple vget/vset support.
        (vint8mf4_t): Ditto.
        (vuint8mf4_t): Ditto.
        (vint8mf2_t): Ditto.
        (vuint8mf2_t): Ditto.
        (vint8m1_t): Ditto.
        (vuint8m1_t): Ditto.
        (vint8m2_t): Ditto.
        (vuint8m2_t): Ditto.
        (vint8m4_t): Ditto.
        (vuint8m4_t): Ditto.
        (vint8m8_t): Ditto.
        (vuint8m8_t): Ditto.
        (vint16mf4_t): Ditto.
        (vuint16mf4_t): Ditto.
        (vint16mf2_t): Ditto.
        (vuint16mf2_t): Ditto.
        (vint16m1_t): Ditto.
        (vuint16m1_t): Ditto.
        (vint16m2_t): Ditto.
        (vuint16m2_t): Ditto.
        (vint16m4_t): Ditto.
        (vuint16m4_t): Ditto.
        (vint16m8_t): Ditto.
        (vuint16m8_t): Ditto.
        (vint32mf2_t): Ditto.
        (vuint32mf2_t): Ditto.
        (vint32m1_t): Ditto.
        (vuint32m1_t): Ditto.
        (vint32m2_t): Ditto.
        (vuint32m2_t): Ditto.
        (vint32m4_t): Ditto.
        (vuint32m4_t): Ditto.
        (vint32m8_t): Ditto.
        (vuint32m8_t): Ditto.
        (vint64m1_t): Ditto.
        (vuint64m1_t): Ditto.
        (vint64m2_t): Ditto.
        (vuint64m2_t): Ditto.
        (vint64m4_t): Ditto.
        (vuint64m4_t): Ditto.
        (vint64m8_t): Ditto.
        (vuint64m8_t): Ditto.
        (vfloat32mf2_t): Ditto.
        (vfloat32m1_t): Ditto.
        (vfloat32m2_t): Ditto.
        (vfloat32m4_t): Ditto.
        (vfloat32m8_t): Ditto.
        (vfloat64m1_t): Ditto.
        (vfloat64m2_t): Ditto.
        (vfloat64m4_t): Ditto.
        (vfloat64m8_t): Ditto.
        (tuple_subpart): Add tuple subpart base type.
        * config/riscv/riscv-vector-builtins.h (struct rvv_arg_type_info): Ditto.
        (tuple_type_field): New function.

---
 gcc/config/riscv/genrvv-type-indexer.cc       | 255 +++++++----
 .../riscv/riscv-vector-builtins-bases.cc      |  49 ++
 .../riscv/riscv-vector-builtins-functions.def |   4 +
 .../riscv/riscv-vector-builtins-types.def     | 209 +++++++++
 gcc/config/riscv/riscv-vector-builtins.cc     |  60 ++-
 gcc/config/riscv/riscv-vector-builtins.def    | 421 +++++++++---------
 gcc/config/riscv/riscv-vector-builtins.h      |  11 +
 7 files changed, 688 insertions(+), 321 deletions(-)
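Before the diff itself, a note on the rule the new `valid_type` overload in genrvv-type-indexer.cc encodes: a tuple type is only valid when NF is in [1, 8] and the register group LMUL x NF fits within 8 vector registers (so LMUL=2 permits NF up to 4, LMUL=4 up to 2, and LMUL=8 only NF=1, while fractional LMUL permits the full NF range). A standalone sketch of that rule, in Python for illustration only (the function name is hypothetical, not part of the patch):

```python
def valid_tuple_nf(lmul_log2: int, nf: int) -> bool:
    """Model of the patch's NF check: NF must be 1..8, and the register
    group LMUL * NF must not occupy more than 8 vector registers."""
    if not 1 <= nf <= 8:
        return False
    # Fractional LMUL (lmul_log2 < 0) still occupies one register.
    lmul = 2 ** lmul_log2 if lmul_log2 >= 0 else 1
    return lmul * nf <= 8

# Mirrors the switch in the patch: case 1 -> nf < 5, case 2 -> nf < 3,
# case 3 -> nf == 1, default -> true.
```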

diff --git a/gcc/config/riscv/genrvv-type-indexer.cc b/gcc/config/riscv/genrvv-type-indexer.cc
index e677b55290c..b96aefebc4e 100644
--- a/gcc/config/riscv/genrvv-type-indexer.cc
+++ b/gcc/config/riscv/genrvv-type-indexer.cc
@@ -60,6 +60,28 @@ valid_type (unsigned sew, int lmul_log2, bool float_p)
     }
 }
 
+bool
+valid_type (unsigned sew, int lmul_log2, unsigned nf, bool float_p)
+{
+  if (!valid_type (sew, lmul_log2, float_p))
+    return false;
+
+  if (nf > 8 || nf < 1)
+    return false;
+
+  switch (lmul_log2)
+    {
+    case 1:
+      return nf < 5;
+    case 2:
+      return nf < 3;
+    case 3:
+      return nf == 1;
+    default:
+      return true;
+    }
+}
+
 std::string
 inttype (unsigned sew, int lmul_log2, bool unsigned_p)
 {
@@ -74,6 +96,23 @@ inttype (unsigned sew, int lmul_log2, bool unsigned_p)
   return mode.str ();
 }
 
+std::string
+inttype (unsigned sew, int lmul_log2, unsigned nf, bool unsigned_p)
+{
+  if (!valid_type (sew, lmul_log2, nf, /*float_t*/ false))
+    return "INVALID";
+
+  std::stringstream mode;
+  mode << "v";
+  if (unsigned_p)
+    mode << "u";
+  mode << "int" << sew << to_lmul (lmul_log2);
+  if (nf > 1)
+    mode << "x" << nf;
+  mode << "_t";
+  return mode.str ();
+}
+
 std::string
 floattype (unsigned sew, int lmul_log2)
 {
@@ -85,6 +124,20 @@ floattype (unsigned sew, int lmul_log2)
   return mode.str ();
 }
 
+std::string
+floattype (unsigned sew, int lmul_log2, unsigned nf)
+{
+  if (!valid_type (sew, lmul_log2, nf, /*float_t*/ true))
+    return "INVALID";
+
+  std::stringstream mode;
+  mode << "vfloat" << sew << to_lmul (lmul_log2);
+  if (nf > 1)
+    mode << "x" << nf;
+  mode << "_t";
+  return mode.str ();
+}
+
 std::string
 maskmode (unsigned sew, int lmul_log2)
 {
@@ -168,24 +221,104 @@ main (int argc, const char **argv)
       for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
 	{
 	  unsigned multiple_of_lmul = 1 << lmul_log2_offset;
-	  const char *comma = lmul_log2_offset == 6 ? "" : ",";
-	  fprintf (fp, "  /*X%d_INTERPRET*/ INVALID%s\n", multiple_of_lmul,
-		   comma);
+	  fprintf (fp, "  /*X%d_INTERPRET*/ INVALID,\n", multiple_of_lmul);
 	}
+      fprintf (fp, "  /*TUPLE_SUBPART*/ INVALID\n");
       fprintf (fp, ")\n");
     }
 
   // Build for vint and vuint
   for (unsigned sew : {8, 16, 32, 64})
     for (int lmul_log2 : {-3, -2, -1, 0, 1, 2, 3})
-      for (bool unsigned_p : {false, true})
+      for (unsigned nf : {1, 2, 3, 4, 5, 6, 7, 8})
+	for (bool unsigned_p : {false, true})
+	  {
+	    if (!valid_type (sew, lmul_log2, nf, /*float_t*/ false))
+	      continue;
+
+	    fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
+	    fprintf (fp, "  /*VECTOR*/ %s,\n",
+		     inttype (sew, lmul_log2, nf, unsigned_p).c_str ());
+	    fprintf (fp, "  /*MASK*/ %s,\n",
+		     maskmode (sew, lmul_log2).c_str ());
+	    fprintf (fp, "  /*SIGNED*/ %s,\n",
+		     inttype (sew, lmul_log2, /*unsigned_p*/ false).c_str ());
+	    fprintf (fp, "  /*UNSIGNED*/ %s,\n",
+		     inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	    for (unsigned eew : {8, 16, 32, 64})
+	      fprintf (fp, "  /*EEW%d_INDEX*/ %s,\n", eew,
+		       same_ratio_eew_type (sew, lmul_log2, eew,
+					    /*unsigned_p*/ true, false)
+			 .c_str ());
+	    fprintf (fp, "  /*SHIFT*/ %s,\n",
+		     inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	    fprintf (fp, "  /*DOUBLE_TRUNC*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
+					  false)
+		       .c_str ());
+	    fprintf (fp, "  /*QUAD_TRUNC*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 4, unsigned_p,
+					  false)
+		       .c_str ());
+	    fprintf (fp, "  /*OCT_TRUNC*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 8, unsigned_p,
+					  false)
+		       .c_str ());
+	    fprintf (fp, "  /*DOUBLE_TRUNC_SCALAR*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
+					  false)
+		       .c_str ());
+	    fprintf (fp, "  /*DOUBLE_TRUNC_SIGNED*/ INVALID,\n");
+	    fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false)
+		       .c_str ());
+	    if (unsigned_p)
+	      fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
+	    else
+	      fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ %s,\n",
+		       same_ratio_eew_type (sew, lmul_log2, sew / 2, true,
+					    false)
+			 .c_str ());
+	    fprintf (fp, "  /*DOUBLE_TRUNC_FLOAT*/ %s,\n",
+		     same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true)
+		       .c_str ());
+	    fprintf (fp, "  /*FLOAT*/ %s,\n",
+		     floattype (sew, lmul_log2).c_str ());
+	    fprintf (fp, "  /*LMUL1*/ %s,\n",
+		     inttype (sew, /*lmul_log2*/ 0, unsigned_p).c_str ());
+	    fprintf (fp, "  /*WLMUL1*/ %s,\n",
+		     inttype (sew * 2, /*lmul_log2*/ 0, unsigned_p).c_str ());
+	    for (unsigned eew : {8, 16, 32, 64})
+	      {
+		if (eew == sew)
+		  fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
+		else
+		  fprintf (fp, "  /*EEW%d_INTERPRET*/ %s,\n", eew,
+			   inttype (eew, lmul_log2, unsigned_p).c_str ());
+	      }
+
+	    for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
+	      {
+		unsigned multiple_of_lmul = 1 << lmul_log2_offset;
+		fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s,\n", multiple_of_lmul,
+			 inttype (sew, lmul_log2 + lmul_log2_offset, unsigned_p)
+			   .c_str ());
+	      }
+	    fprintf (fp, "  /*TUPLE_SUBPART*/ %s\n",
+		     inttype (sew, lmul_log2, 1, unsigned_p).c_str ());
+	    fprintf (fp, ")\n");
+	  }
+  // Build for vfloat
+  for (unsigned sew : {32, 64})
+    for (int lmul_log2 : {-3, -2, -1, 0, 1, 2, 3})
+      for (unsigned nf : {1, 2, 3, 4, 5, 6, 7, 8})
 	{
-	  if (!valid_type (sew, lmul_log2, /*float_t*/ false))
+	  if (!valid_type (sew, lmul_log2, nf, /*float_t*/ true))
 	    continue;
 
 	  fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
 	  fprintf (fp, "  /*VECTOR*/ %s,\n",
-		   inttype (sew, lmul_log2, unsigned_p).c_str ());
+		   floattype (sew, lmul_log2, nf).c_str ());
 	  fprintf (fp, "  /*MASK*/ %s,\n", maskmode (sew, lmul_log2).c_str ());
 	  fprintf (fp, "  /*SIGNED*/ %s,\n",
 		   inttype (sew, lmul_log2, /*unsigned_p*/ false).c_str ());
@@ -196,118 +329,42 @@ main (int argc, const char **argv)
 		     same_ratio_eew_type (sew, lmul_log2, eew,
 					  /*unsigned_p*/ true, false)
 		       .c_str ());
-	  fprintf (fp, "  /*SHIFT*/ %s,\n",
-		   inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
+	  fprintf (fp, "  /*SHIFT*/ INVALID,\n");
 	  fprintf (fp, "  /*DOUBLE_TRUNC*/ %s,\n",
-		   same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
-					false)
-		     .c_str ());
-	  fprintf (fp, "  /*QUAD_TRUNC*/ %s,\n",
-		   same_ratio_eew_type (sew, lmul_log2, sew / 4, unsigned_p,
-					false)
-		     .c_str ());
-	  fprintf (fp, "  /*OCT_TRUNC*/ %s,\n",
-		   same_ratio_eew_type (sew, lmul_log2, sew / 8, unsigned_p,
-					false)
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true)
 		     .c_str ());
+	  fprintf (fp, "  /*QUAD_TRUNC*/ INVALID,\n");
+	  fprintf (fp, "  /*OCT_TRUNC*/ INVALID,\n");
 	  fprintf (fp, "  /*DOUBLE_TRUNC_SCALAR*/ %s,\n",
-		   same_ratio_eew_type (sew, lmul_log2, sew / 2, unsigned_p,
-					false)
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true)
+		     .c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC_SIGNED*/ %s,\n",
+		   same_ratio_eew_type (sew, lmul_log2, sew / 2, false, false)
 		     .c_str ());
-	  fprintf (fp, "  /*DOUBLE_TRUNC_SIGNED*/ INVALID,\n");
 	  fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ %s,\n",
 		   same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false)
 		     .c_str ());
-	  if (unsigned_p)
-	    fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
-	  else
-	    fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ %s,\n",
-		     same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false)
-		       .c_str ());
+	  fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
 	  fprintf (fp, "  /*DOUBLE_TRUNC_FLOAT*/ %s,\n",
 		   same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true)
 		     .c_str ());
-	  fprintf (fp, "  /*FLOAT*/ %s,\n",
-		   floattype (sew, lmul_log2).c_str ());
+	  fprintf (fp, "  /*FLOAT*/ INVALID,\n");
 	  fprintf (fp, "  /*LMUL1*/ %s,\n",
-		   inttype (sew, /*lmul_log2*/ 0, unsigned_p).c_str ());
+		   floattype (sew, /*lmul_log2*/ 0).c_str ());
 	  fprintf (fp, "  /*WLMUL1*/ %s,\n",
-		   inttype (sew * 2, /*lmul_log2*/ 0, unsigned_p).c_str ());
+		   floattype (sew * 2, /*lmul_log2*/ 0).c_str ());
 	  for (unsigned eew : {8, 16, 32, 64})
-	    {
-	      if (eew == sew)
-		fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
-	      else
-		fprintf (fp, "  /*EEW%d_INTERPRET*/ %s,\n", eew,
-			 inttype (eew, lmul_log2, unsigned_p).c_str ());
-	    }
-
+	    fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
 	  for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
 	    {
 	      unsigned multiple_of_lmul = 1 << lmul_log2_offset;
-	      const char *comma = lmul_log2_offset == 6 ? "" : ",";
-	      fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s%s\n", multiple_of_lmul,
-		       inttype (sew, lmul_log2 + lmul_log2_offset, unsigned_p)
-			 .c_str (),
-		       comma);
+	      fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s,\n", multiple_of_lmul,
+		       floattype (sew, lmul_log2 + lmul_log2_offset).c_str ());
 	    }
+	  fprintf (fp, "  /*TUPLE_SUBPART*/ %s\n",
+		   floattype (sew, lmul_log2, 1).c_str ());
 	  fprintf (fp, ")\n");
 	}
-  // Build for vfloat
-  for (unsigned sew : {32, 64})
-    for (int lmul_log2 : {-3, -2, -1, 0, 1, 2, 3})
-      {
-	if (!valid_type (sew, lmul_log2, /*float_t*/ true))
-	  continue;
-
-	fprintf (fp, "DEF_RVV_TYPE_INDEX (\n");
-	fprintf (fp, "  /*VECTOR*/ %s,\n", floattype (sew, lmul_log2).c_str ());
-	fprintf (fp, "  /*MASK*/ %s,\n", maskmode (sew, lmul_log2).c_str ());
-	fprintf (fp, "  /*SIGNED*/ %s,\n",
-		 inttype (sew, lmul_log2, /*unsigned_p*/ false).c_str ());
-	fprintf (fp, "  /*UNSIGNED*/ %s,\n",
-		 inttype (sew, lmul_log2, /*unsigned_p*/ true).c_str ());
-	for (unsigned eew : {8, 16, 32, 64})
-	  fprintf (fp, "  /*EEW%d_INDEX*/ %s,\n", eew,
-		   same_ratio_eew_type (sew, lmul_log2, eew,
-					/*unsigned_p*/ true, false)
-		     .c_str ());
-	fprintf (fp, "  /*SHIFT*/ INVALID,\n");
-	fprintf (
-	  fp, "  /*DOUBLE_TRUNC*/ %s,\n",
-	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
-	fprintf (fp, "  /*QUAD_TRUNC*/ INVALID,\n");
-	fprintf (fp, "  /*OCT_TRUNC*/ INVALID,\n");
-	fprintf (
-	  fp, "  /*DOUBLE_TRUNC_SCALAR*/ %s,\n",
-	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
-	fprintf (
-	  fp, "  /*DOUBLE_TRUNC_SIGNED*/ %s,\n",
-	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, false).c_str ());
-	fprintf (
-	  fp, "  /*DOUBLE_TRUNC_UNSIGNED*/ %s,\n",
-	  same_ratio_eew_type (sew, lmul_log2, sew / 2, true, false).c_str ());
-	fprintf (fp, "  /*DOUBLE_TRUNC_UNSIGNED_SCALAR*/ INVALID,\n");
-	fprintf (
-	  fp, "  /*DOUBLE_TRUNC_FLOAT*/ %s,\n",
-	  same_ratio_eew_type (sew, lmul_log2, sew / 2, false, true).c_str ());
-	fprintf (fp, "  /*FLOAT*/ INVALID,\n");
-	fprintf (fp, "  /*LMUL1*/ %s,\n",
-		 floattype (sew, /*lmul_log2*/ 0).c_str ());
-	fprintf (fp, "  /*WLMUL1*/ %s,\n",
-		 floattype (sew * 2, /*lmul_log2*/ 0).c_str ());
-	for (unsigned eew : {8, 16, 32, 64})
-	  fprintf (fp, "  /*EEW%d_INTERPRET*/ INVALID,\n", eew);
-	for (unsigned lmul_log2_offset : {1, 2, 3, 4, 5, 6})
-	  {
-	    unsigned multiple_of_lmul = 1 << lmul_log2_offset;
-	    const char *comma = lmul_log2_offset == 6 ? "" : ",";
-	    fprintf (fp, "  /*X%d_VLMUL_EXT*/ %s%s\n", multiple_of_lmul,
-		     floattype (sew, lmul_log2 + lmul_log2_offset).c_str (),
-		     comma);
-	  }
-	fprintf (fp, ")\n");
-      }
 
   return 0;
 }
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 52467bbc961..8693b2887fb 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -1548,9 +1548,40 @@ class vset : public function_base
 public:
   bool apply_vl_p () const override { return false; }
 
+  gimple *fold (gimple_folder &f) const override
+  {
+    tree rhs_tuple = gimple_call_arg (f.call, 0);
+    /* LMUL > 1 non-tuple vector types are not structures,
+       so we can't use __val[index] to set the subpart.  */
+    if (!riscv_v_ext_tuple_mode_p (TYPE_MODE (TREE_TYPE (rhs_tuple))))
+      return NULL;
+    tree index = gimple_call_arg (f.call, 1);
+    tree rhs_vector = gimple_call_arg (f.call, 2);
+
+    /* Replace the call with two statements: a copy of the full tuple
+       to the call result, followed by an update of the individual vector.
+
+       The fold routines expect the replacement statement to have the
+       same lhs as the original call, so return the copy statement
+       rather than the field update.  */
+    gassign *copy = gimple_build_assign (unshare_expr (f.lhs), rhs_tuple);
+
+    /* Get a reference to the individual vector.  */
+    tree field = tuple_type_field (TREE_TYPE (f.lhs));
+    tree lhs_array
+      = build3 (COMPONENT_REF, TREE_TYPE (field), f.lhs, field, NULL_TREE);
+    tree lhs_vector = build4 (ARRAY_REF, TREE_TYPE (rhs_vector), lhs_array,
+			      index, NULL_TREE, NULL_TREE);
+    gassign *update = gimple_build_assign (lhs_vector, rhs_vector);
+    gsi_insert_after (f.gsi, update, GSI_SAME_STMT);
+
+    return copy;
+  }
+
   rtx expand (function_expander &e) const override
   {
     rtx dest = expand_normal (CALL_EXPR_ARG (e.exp, 0));
+    gcc_assert (riscv_v_ext_vector_mode_p (GET_MODE (dest)));
     rtx index = expand_normal (CALL_EXPR_ARG (e.exp, 1));
     rtx src = expand_normal (CALL_EXPR_ARG (e.exp, 2));
     poly_int64 offset = INTVAL (index) * GET_MODE_SIZE (GET_MODE (src));
@@ -1567,9 +1598,27 @@ class vget : public function_base
 public:
   bool apply_vl_p () const override { return false; }
 
+  gimple *fold (gimple_folder &f) const override
+  {
+    /* Fold into a normal gimple component access.  */
+    tree rhs_tuple = gimple_call_arg (f.call, 0);
+    /* LMUL > 1 non-tuple vector types are not structures,
+       so we can't use __val[index] to get the subpart.  */
+    if (!riscv_v_ext_tuple_mode_p (TYPE_MODE (TREE_TYPE (rhs_tuple))))
+      return NULL;
+    tree index = gimple_call_arg (f.call, 1);
+    tree field = tuple_type_field (TREE_TYPE (rhs_tuple));
+    tree rhs_array
+      = build3 (COMPONENT_REF, TREE_TYPE (field), rhs_tuple, field, NULL_TREE);
+    tree rhs_vector = build4 (ARRAY_REF, TREE_TYPE (f.lhs), rhs_array, index,
+			      NULL_TREE, NULL_TREE);
+    return gimple_build_assign (f.lhs, rhs_vector);
+  }
+
   rtx expand (function_expander &e) const override
   {
     rtx src = expand_normal (CALL_EXPR_ARG (e.exp, 0));
+    gcc_assert (riscv_v_ext_vector_mode_p (GET_MODE (src)));
     rtx index = expand_normal (CALL_EXPR_ARG (e.exp, 1));
     poly_int64 offset = INTVAL (index) * GET_MODE_SIZE (GET_MODE (e.target));
     rtx subreg
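The gimple folds above reduce tuple-type vset/vget to ordinary component accesses: vget becomes a read of one array element of the tuple's field, while vset becomes a copy of the whole tuple into the result followed by an update of the single subpart. A minimal Python model of those value semantics (names are hypothetical, for illustration only):

```python
def vset_tuple(tup, index, vec):
    """Model of the vset fold: copy the whole tuple to the result,
    then overwrite one subpart; the input tuple is left untouched."""
    out = list(tup)      # the full-tuple copy
    out[index] = vec     # the single-element update
    return tuple(out)

def vget_tuple(tup, index):
    """Model of the vget fold: a plain component access."""
    return tup[index]
```

The copy-then-update split mirrors why the fold returns the copy statement: the fold machinery expects the replacement to keep the original call's lhs, so the field update is inserted after it.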
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 3f1513cb9fd..ed3f5583fc6 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -533,4 +533,8 @@ DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
 DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
 DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
 
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+
 #undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index a74df066521..5bd36a6524e 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -235,6 +235,12 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_LMUL4_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use the "DEF_RVV_TUPLE_OPS" macro to include all tuple types,
+   which will be iterated over and registered as intrinsic functions.  */
+#ifndef DEF_RVV_TUPLE_OPS
+#define DEF_RVV_TUPLE_OPS(TYPE, REQUIRE)
+#endif
+
 DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_I_OPS (vint8mf4_t, 0)
 DEF_RVV_I_OPS (vint8mf2_t, 0)
@@ -818,6 +824,208 @@ DEF_RVV_LMUL4_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_LMUL4_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_LMUL4_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 
+DEF_RVV_TUPLE_OPS (vint8mf8x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf4x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf4x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint8mf2x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8mf2x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint8m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint8m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf4x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint16mf2x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16mf2x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint16m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint16m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint32mf2x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x3_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x5_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x6_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x7_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x5_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x6_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x7_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m1x8_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m2x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m2x3_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m2x4_t, 0)
+DEF_RVV_TUPLE_OPS (vint32m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vuint32m4x2_t, 0)
+DEF_RVV_TUPLE_OPS (vint64m1x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m1x8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m1x8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m2x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m2x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m2x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m2x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m2x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m2x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint64m4x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint64m4x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x3_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x5_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x6_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x7_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32m1x2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x3_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x5_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x6_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x7_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m1x8_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m2x2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m2x3_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m2x4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat32m4x2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_TUPLE_OPS (vfloat64m1x2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x3_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x5_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x6_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x7_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m1x8_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m2x2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m2x3_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
+
 #undef DEF_RVV_I_OPS
 #undef DEF_RVV_U_OPS
 #undef DEF_RVV_F_OPS
@@ -853,3 +1061,4 @@ DEF_RVV_LMUL4_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 #undef DEF_RVV_LMUL1_OPS
 #undef DEF_RVV_LMUL2_OPS
 #undef DEF_RVV_LMUL4_OPS
+#undef DEF_RVV_TUPLE_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 3cfa9c90181..e3cdbfe890a 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -378,6 +378,12 @@ static const rvv_type_info lmul4_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of tuple types that will be registered for intrinsic functions.  */
+static const rvv_type_info tuple_ops[] = {
+#define DEF_RVV_TUPLE_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
   = rvv_arg_type_info (NUM_BASE_TYPES);
 
@@ -759,6 +765,11 @@ static CONSTEXPR const rvv_arg_type_info ext_x8_vget_args[]
   = {rvv_arg_type_info (RVV_BASE_vlmul_ext_x8),
      rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (vector_type, size_t, vector_type)
+   function.  */
+static CONSTEXPR const rvv_arg_type_info tuple_vset_args[]
+  = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info (RVV_BASE_size),
+     rvv_arg_type_info (RVV_BASE_tuple_subpart), rvv_arg_type_info_end};
+
 /* A list of none preds that will be registered for intrinsic functions.  */
 static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
@@ -2143,17 +2154,32 @@ static CONSTEXPR const rvv_op_info ul_none_void_ops
      rvv_arg_type_info (RVV_BASE_unsigned_long), /* Return type */
      void_args /* Args */};
 
+/* A static operand information for vector_type func (vector_type, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info all_v_vset_tuple_ops
+  = {tuple_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     tuple_vset_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, size_t)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info all_v_vget_tuple_ops
+  = {tuple_ops,					 /* Types */
+     OP_TYPE_v,					 /* Suffix */
+     rvv_arg_type_info (RVV_BASE_tuple_subpart), /* Return type */
+     v_size_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
-#define DEF_RVV_TYPE_INDEX(VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, \
-		      EEW32_INDEX, EEW64_INDEX, SHIFT, DOUBLE_TRUNC,           \
-		      QUAD_TRUNC, OCT_TRUNC, DOUBLE_TRUNC_SCALAR,              \
-		      DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,              \
-		      DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, \
-		      LMUL1, WLMUL1, EEW8_INTERPRET, EEW16_INTERPRET,          \
-		      EEW32_INTERPRET, EEW64_INTERPRET, X2_VLMUL_EXT,          \
-		      X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT,               \
-		      X32_VLMUL_EXT, X64_VLMUL_EXT)                            \
+#define DEF_RVV_TYPE_INDEX(                                                    \
+  VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, EEW32_INDEX,        \
+  EEW64_INDEX, SHIFT, DOUBLE_TRUNC, QUAD_TRUNC, OCT_TRUNC,                     \
+  DOUBLE_TRUNC_SCALAR, DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,             \
+  DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, LMUL1, WLMUL1,      \
+  EEW8_INTERPRET, EEW16_INTERPRET, EEW32_INTERPRET, EEW64_INTERPRET,           \
+  X2_VLMUL_EXT, X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT, X32_VLMUL_EXT,      \
+  X64_VLMUL_EXT, TUPLE_SUBPART)                                                \
   {                                                                            \
     VECTOR_TYPE_##VECTOR,                                                      \
     VECTOR_TYPE_INVALID,                                                       \
@@ -2196,6 +2222,7 @@ static CONSTEXPR const function_type_info function_types[] = {
     VECTOR_TYPE_##X32_VLMUL_EXT,                                               \
     VECTOR_TYPE_##X64_VLMUL_EXT,                                               \
     VECTOR_TYPE_INVALID,                                                       \
+    VECTOR_TYPE_##TUPLE_SUBPART,                                               \
   },
 #include "riscv-vector-builtins.def"
 }; // namespace riscv_vector
@@ -2645,6 +2672,21 @@ rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
   gcc_unreachable ();
 }
 
+tree
+rvv_arg_type_info::get_tuple_subpart_type (vector_type_index type_idx) const
+{
+  switch (type_idx)
+    {
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, ARGS...)      \
+  case VECTOR_TYPE_##NAME:                                                     \
+    return builtin_types[VECTOR_TYPE_##SUBPART_TYPE].vector;
+#include "riscv-vector-builtins.def"
+    default:
+      gcc_unreachable ();
+    }
+  gcc_unreachable ();
+}
+
 function_instance::function_instance (const char *base_name_in,
 				      const function_base *base_in,
 				      const function_shape *shape_in,
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index b0d6edda1b6..78b3c7e33fd 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -73,15 +73,14 @@ along with GCC; see the file COPYING3.  If not see
 
 /* Use "DEF_RVV_TYPE_INDEX" macro to define RVV function types.  */
 #ifndef DEF_RVV_TYPE_INDEX
-#define DEF_RVV_TYPE_INDEX(VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, \
-		      EEW32_INDEX, EEW64_INDEX, SHIFT, DOUBLE_TRUNC,           \
-		      QUAD_TRUNC, OCT_TRUNC, DOUBLE_TRUNC_SCALAR,              \
-		      DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,              \
-		      DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, \
-		      LMUL1, WLMUL1, EEW8_INTERPRET, EEW16_INTERPRET,          \
-		      EEW32_INTERPRET, EEW64_INTERPRET, X2_VLMUL_EXT,          \
-		      X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT,               \
-		      X32_VLMUL_EXT, X64_VLMUL_EXT)
+#define DEF_RVV_TYPE_INDEX(                                                    \
+  VECTOR, MASK, SIGNED, UNSIGNED, EEW8_INDEX, EEW16_INDEX, EEW32_INDEX,        \
+  EEW64_INDEX, SHIFT, DOUBLE_TRUNC, QUAD_TRUNC, OCT_TRUNC,                     \
+  DOUBLE_TRUNC_SCALAR, DOUBLE_TRUNC_SIGNED, DOUBLE_TRUNC_UNSIGNED,             \
+  DOUBLE_TRUNC_UNSIGNED_SCALAR, DOUBLE_TRUNC_FLOAT, FLOAT, LMUL1, WLMUL1,      \
+  EEW8_INTERPRET, EEW16_INTERPRET, EEW32_INTERPRET, EEW64_INTERPRET,           \
+  X2_VLMUL_EXT, X4_VLMUL_EXT, X8_VLMUL_EXT, X16_VLMUL_EXT, X32_VLMUL_EXT,      \
+  X64_VLMUL_EXT, TUPLE_SUBPART)
 #endif
 
 /* SEW/LMUL = 64:
@@ -127,209 +126,6 @@ DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx2QI, VNx1QI, VOID, _i8mf
 	      _e8mf8)
 DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx2QI, VNx1QI, VOID, _u8mf8,
 	      _u8, _e8mf8)
-/* LMUL = 1/4:
-   Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI, VNx1QI, _i8mf4,
-	      _i8, _e8mf4)
-DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI, VNx1QI, _u8mf4,
-	      _u8, _e8mf4)
-/* LMUL = 1/2:
-   Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI, VNx2QI, _i8mf2,
-	      _i8, _e8mf2)
-DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI, VNx2QI, _u8mf2,
-	      _u8, _e8mf2)
-/* LMUL = 1:
-   Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI, VNx4QI, _i8m1, _i8,
-	      _e8m1)
-DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI, VNx4QI, _u8m1,
-	      _u8, _e8m1)
-/* LMUL = 2:
-   Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI, VNx8QI, _i8m2, _i8,
-	      _e8m2)
-DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI, VNx8QI, _u8m2,
-	      _u8, _e8m2)
-/* LMUL = 4:
-   Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI, VNx16QI, _i8m4, _i8,
-	      _e8m4)
-DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI, VNx16QI, _u8m4,
-	      _u8, _e8m4)
-/* LMUL = 8:
-   Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI, VNx32QI, _i8m8, _i8,
-	      _e8m8)
-DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI, VNx32QI, _u8m8,
-	      _u8, _e8m8)
-
-/* LMUL = 1/4:
-   Only enble when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
-   Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128.  */
-DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI, VOID, _i16mf4,
-	      _i16, _e16mf4)
-DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI, VNx1HI, VOID,
-	      _u16mf4, _u16, _e16mf4)
-/* LMUL = 1/2:
-   Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI, VNx1HI, _i16mf2,
-	      _i16, _e16mf2)
-DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI, VNx2HI, VNx1HI,
-	      _u16mf2, _u16, _e16mf2)
-/* LMUL = 1:
-   Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI, VNx2HI, _i16m1,
-	      _i16, _e16m1)
-DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI, VNx2HI, _u16m1,
-	      _u16, _e16m1)
-/* LMUL = 2:
-   Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI, VNx4HI, _i16m2,
-	      _i16, _e16m2)
-DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI, VNx4HI, _u16m2,
-	      _u16, _e16m2)
-/* LMUL = 4:
-   Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI, VNx8HI, _i16m4,
-	      _i16, _e16m4)
-DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI, VNx16HI, VNx8HI,
-	      _u16m4, _u16, _e16m4)
-/* LMUL = 8:
-   Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI, VNx16HI, _i16m8,
-	      _i16, _e16m8)
-DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI, VNx32HI, VNx16HI,
-	      _u16m8, _u16, _e16m8)
-
-/* LMUL = 1/2:
-   Only enble when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
-   Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128.  */
-DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI, VOID, _i32mf2,
-	      _i32, _e32mf2)
-DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI, VNx1SI, VOID,
-	      _u32mf2, _u32, _e32mf2)
-/* LMUL = 1:
-   Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx4SI, VNx2SI, VNx1SI, _i32m1,
-	      _i32, _e32m1)
-DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx4SI, VNx2SI, VNx1SI, _u32m1,
-	      _u32, _e32m1)
-/* LMUL = 2:
-   Machine mode = VNx8SImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx8SI, VNx4SI, VNx2SI, _i32m2,
-	      _i32, _e32m2)
-DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx8SI, VNx4SI, VNx2SI, _u32m2,
-	      _u32, _e32m2)
-/* LMUL = 4:
-   Machine mode = VNx16SImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx16SI, VNx8SI, VNx4SI, _i32m4,
-	      _i32, _e32m4)
-DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx16SI, VNx8SI, VNx4SI, _u32m4,
-	      _u32, _e32m4)
-/* LMUL = 8:
-   Machine mode = VNx32SImode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx32SI, VNx16SI, VNx8SI, _i32m8,
-	      _i32, _e32m8)
-DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx32SI, VNx16SI, VNx8SI,
-	      _u32m8, _u32, _e32m8)
-
-/* SEW = 64:
-   Disable when !TARGET_VECTOR_ELEN_64.  */
-DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, VNx2DI, VNx1DI, VOID, _i64m1,
-	      _i64, _e64m1)
-DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx2DI, VNx1DI, VOID, _u64m1,
-	      _u64, _e64m1)
-DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, VNx4DI, VNx2DI, VOID, _i64m2,
-	      _i64, _e64m2)
-DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx4DI, VNx2DI, VOID, _u64m2,
-	      _u64, _e64m2)
-DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, VNx8DI, VNx4DI, VOID, _i64m4,
-	      _i64, _e64m4)
-DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx8DI, VNx4DI, VOID, _u64m4,
-	      _u64, _e64m4)
-DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, VNx16DI, VNx8DI, VOID, _i64m8,
-	      _i64, _e64m8)
-DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx16DI, VNx8DI, VOID, _u64m8,
-	      _u64, _e64m8)
-
-/* Disable all when !TARGET_VECTOR_ELEN_FP_32.  */
-/* LMUL = 1/2:
-   Only enble when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1SFmode when TARGET_MIN_VLEN < 128.
-   Machine mode = VNx2SFmode when TARGET_MIN_VLEN >= 128.  */
-DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx2SF, VNx1SF, VOID,
-	      _f32mf2, _f32, _e32mf2)
-/* LMUL = 1:
-   Machine mode = VNx4SFmode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx4SF, VNx2SF, VNx1SF,
-	      _f32m1, _f32, _e32m1)
-/* LMUL = 2:
-   Machine mode = VNx8SFmode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx8SF, VNx4SF, VNx2SF,
-	      _f32m2, _f32, _e32m2)
-/* LMUL = 4:
-   Machine mode = VNx16SFmode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx16SF, VNx8SF, VNx4SF,
-	      _f32m4, _f32, _e32m4)
-/* LMUL = 8:
-   Machine mode = VNx32SFmode when TARGET_MIN_VLEN >= 128.
-   Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
-   Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx32SF, VNx16SF, VNx8SF,
-	      _f32m8, _f32, _e32m8)
-
-/* SEW = 64:
-   Disable when !TARGET_VECTOR_ELEN_FP_64.  */
-DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx2DF, VNx1DF, VOID, _f64m1,
-	      _f64, _e64m1)
-DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx4DF, VNx2DF, VOID, _f64m2,
-	      _f64, _e64m2)
-DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID, _f64m4,
-	      _f64, _e64m4)
-DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
-	      _f64, _e64m8)
-
-/* Define tuple type for segment loads/stores, each tuple type should always satisfy
-   naming with vint<SEW><LMUL>x<NF>_t. Note that it's always LMUL * NF <= 8.  */
 /* Define tuple types for SEW = 8, LMUL = MF8.  */
 DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t, int8, 2, _i8mf8x2)
 DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t, uint8, 2, _u8mf8x2)
@@ -345,6 +141,14 @@ DEF_RVV_TUPLE_TYPE (vint8mf8x7_t, 17, __rvv_int8mf8x7_t, vint8mf8_t, int8, 7, _i
 DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18, __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7, _u8mf8x7)
 DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t, int8, 8, _i8mf8x8)
 DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t, uint8, 8, _u8mf8x8)
+/* LMUL = 1/4:
+   Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI, VNx1QI, _i8mf4,
+	      _i8, _e8mf4)
+DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI, VNx1QI, _u8mf4,
+	      _u8, _e8mf4)
 /* Define tuple types for SEW = 8, LMUL = MF4.  */
 DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t, int8, 2, _i8mf4x2)
 DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t, uint8, 2, _u8mf4x2)
@@ -360,6 +164,14 @@ DEF_RVV_TUPLE_TYPE (vint8mf4x7_t, 17, __rvv_int8mf4x7_t, vint8mf4_t, int8, 7, _i
 DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18, __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7, _u8mf4x7)
 DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t, int8, 8, _i8mf4x8)
 DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t, uint8, 8, _u8mf4x8)
+/* LMUL = 1/2:
+   Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI, VNx2QI, _i8mf2,
+	      _i8, _e8mf2)
+DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI, VNx2QI, _u8mf2,
+	      _u8, _e8mf2)
 /* Define tuple types for SEW = 8, LMUL = MF2.  */
 DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t, int8, 2, _i8mf2x2)
 DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t, uint8, 2, _u8mf2x2)
@@ -375,6 +187,14 @@ DEF_RVV_TUPLE_TYPE (vint8mf2x7_t, 17, __rvv_int8mf2x7_t, vint8mf2_t, int8, 7, _i
 DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18, __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7, _u8mf2x7)
 DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t, int8, 8, _i8mf2x8)
 DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t, uint8, 8, _u8mf2x8)
+/* LMUL = 1:
+   Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI, VNx4QI, _i8m1, _i8,
+	      _e8m1)
+DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI, VNx4QI, _u8m1,
+	      _u8, _e8m1)
 /* Define tuple types for SEW = 8, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8, 2, _i8m1x2)
 DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t, uint8, 2, _u8m1x2)
@@ -390,6 +210,14 @@ DEF_RVV_TUPLE_TYPE (vint8m1x7_t, 16, __rvv_int8m1x7_t, vint8m1_t, int8, 7, _i8m1
 DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17, __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _u8m1x7)
 DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8, 8, _i8m1x8)
 DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t, uint8, 8, _u8m1x8)
+/* LMUL = 2:
+   Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI, VNx8QI, _i8m2, _i8,
+	      _e8m2)
+DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI, VNx8QI, _u8m2,
+	      _u8, _e8m2)
 /* Define tuple types for SEW = 8, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8, 2, _i8m2x2)
 DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t, uint8, 2, _u8m2x2)
@@ -397,9 +225,34 @@ DEF_RVV_TUPLE_TYPE (vint8m2x3_t, 16, __rvv_int8m2x3_t, vint8m2_t, int8, 3, _i8m2
 DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17, __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _u8m2x3)
 DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8, 4, _i8m2x4)
 DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t, uint8, 4, _u8m2x4)
+/* LMUL = 4:
+   Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI, VNx16QI, _i8m4, _i8,
+	      _e8m4)
+DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI, VNx16QI, _u8m4,
+	      _u8, _e8m4)
 /* Define tuple types for SEW = 8, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8, 2, _i8m4x2)
 DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t, uint8, 2, _u8m4x2)
+/* LMUL = 8:
+   Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI, VNx32QI, _i8m8, _i8,
+	      _e8m8)
+DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI, VNx32QI, _u8m8,
+	      _u8, _e8m8)
+
+/* LMUL = 1/4:
+   Only enable when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
+   Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128.  */
+DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI, VOID, _i16mf4,
+	      _i16, _e16mf4)
+DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI, VNx1HI, VOID,
+	      _u16mf4, _u16, _e16mf4)
 /* Define tuple types for SEW = 16, LMUL = MF4.  */
 DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t, int16, 2, _i16mf4x2)
 DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t, vuint16mf4_t, uint16, 2, _u16mf4x2)
@@ -415,6 +268,14 @@ DEF_RVV_TUPLE_TYPE (vint16mf4x7_t, 18, __rvv_int16mf4x7_t, vint16mf4_t, int16, 7
 DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19, __rvv_uint16mf4x7_t, vuint16mf4_t, uint16, 7, _u16mf4x7)
 DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t, int16, 8, _i16mf4x8)
 DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t, vuint16mf4_t, uint16, 8, _u16mf4x8)
+/* LMUL = 1/2:
+   Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI, VNx1HI, _i16mf2,
+	      _i16, _e16mf2)
+DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI, VNx2HI, VNx1HI,
+	      _u16mf2, _u16, _e16mf2)
 /* Define tuple types for SEW = 16, LMUL = MF2.  */
 DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t, int16, 2, _i16mf2x2)
 DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t, vuint16mf2_t, uint16, 2, _u16mf2x2)
@@ -430,6 +291,14 @@ DEF_RVV_TUPLE_TYPE (vint16mf2x7_t, 18, __rvv_int16mf2x7_t, vint16mf2_t, int16, 7
 DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19, __rvv_uint16mf2x7_t, vuint16mf2_t, uint16, 7, _u16mf2x7)
 DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t, int16, 8, _i16mf2x8)
 DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t, vuint16mf2_t, uint16, 8, _u16mf2x8)
+/* LMUL = 1:
+   Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI, VNx2HI, _i16m1,
+	      _i16, _e16m1)
+DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI, VNx2HI, _u16m1,
+	      _u16, _e16m1)
 /* Define tuple types for SEW = 16, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t, int16, 2, _i16m1x2)
 DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t, uint16, 2, _u16m1x2)
@@ -445,6 +314,14 @@ DEF_RVV_TUPLE_TYPE (vint16m1x7_t, 17, __rvv_int16m1x7_t, vint16m1_t, int16, 7, _
 DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18, __rvv_uint16m1x7_t, vuint16m1_t, uint16, 7, _u16m1x7)
 DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t, int16, 8, _i16m1x8)
 DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t, uint16, 8, _u16m1x8)
+/* LMUL = 2:
+   Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI, VNx4HI, _i16m2,
+	      _i16, _e16m2)
+DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI, VNx4HI, _u16m2,
+	      _u16, _e16m2)
 /* Define tuple types for SEW = 16, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t, int16, 2, _i16m2x2)
 DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t, uint16, 2, _u16m2x2)
@@ -452,9 +329,34 @@ DEF_RVV_TUPLE_TYPE (vint16m2x3_t, 17, __rvv_int16m2x3_t, vint16m2_t, int16, 3, _
 DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18, __rvv_uint16m2x3_t, vuint16m2_t, uint16, 3, _u16m2x3)
 DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t, int16, 4, _i16m2x4)
 DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t, uint16, 4, _u16m2x4)
+/* LMUL = 4:
+   Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI, VNx8HI, _i16m4,
+	      _i16, _e16m4)
+DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI, VNx16HI, VNx8HI,
+	      _u16m4, _u16, _e16m4)
 /* Define tuple types for SEW = 16, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t, int16, 2, _i16m4x2)
 DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t, uint16, 2, _u16m4x2)
+/* LMUL = 8:
+   Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI, VNx16HI, _i16m8,
+	      _i16, _e16m8)
+DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI, VNx32HI, VNx16HI,
+	      _u16m8, _u16, _e16m8)
+
+/* LMUL = 1/2:
+   Only enable when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
+   Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128.  */
+DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI, VOID, _i32mf2,
+	      _i32, _e32mf2)
+DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI, VNx1SI, VOID,
+	      _u32mf2, _u32, _e32mf2)
 /* Define tuple types for SEW = 32, LMUL = MF2.  */
 DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t, int32, 2, _i32mf2x2)
 DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t, vuint32mf2_t, uint32, 2, _u32mf2x2)
@@ -470,6 +372,14 @@ DEF_RVV_TUPLE_TYPE (vint32mf2x7_t, 18, __rvv_int32mf2x7_t, vint32mf2_t, int32, 7
 DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19, __rvv_uint32mf2x7_t, vuint32mf2_t, uint32, 7, _u32mf2x7)
 DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t, int32, 8, _i32mf2x8)
 DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t, vuint32mf2_t, uint32, 8, _u32mf2x8)
+/* LMUL = 1:
+   Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx4SI, VNx2SI, VNx1SI, _i32m1,
+	      _i32, _e32m1)
+DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx4SI, VNx2SI, VNx1SI, _u32m1,
+	      _u32, _e32m1)
 /* Define tuple types for SEW = 32, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vint32m1x2_t, 17, __rvv_int32m1x2_t, vint32m1_t, int32, 2, _i32m1x2)
 DEF_RVV_TUPLE_TYPE (vuint32m1x2_t, 18, __rvv_uint32m1x2_t, vuint32m1_t, uint32, 2, _u32m1x2)
@@ -485,6 +395,14 @@ DEF_RVV_TUPLE_TYPE (vint32m1x7_t, 17, __rvv_int32m1x7_t, vint32m1_t, int32, 7, _
 DEF_RVV_TUPLE_TYPE (vuint32m1x7_t, 18, __rvv_uint32m1x7_t, vuint32m1_t, uint32, 7, _u32m1x7)
 DEF_RVV_TUPLE_TYPE (vint32m1x8_t, 17, __rvv_int32m1x8_t, vint32m1_t, int32, 8, _i32m1x8)
 DEF_RVV_TUPLE_TYPE (vuint32m1x8_t, 18, __rvv_uint32m1x8_t, vuint32m1_t, uint32, 8, _u32m1x8)
+/* LMUL = 2:
+   Machine mode = VNx8SImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx8SI, VNx4SI, VNx2SI, _i32m2,
+	      _i32, _e32m2)
+DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx8SI, VNx4SI, VNx2SI, _u32m2,
+	      _u32, _e32m2)
 /* Define tuple types for SEW = 32, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vint32m2x2_t, 17, __rvv_int32m2x2_t, vint32m2_t, int32, 2, _i32m2x2)
 DEF_RVV_TUPLE_TYPE (vuint32m2x2_t, 18, __rvv_uint32m2x2_t, vuint32m2_t, uint32, 2, _u32m2x2)
@@ -492,9 +410,32 @@ DEF_RVV_TUPLE_TYPE (vint32m2x3_t, 17, __rvv_int32m2x3_t, vint32m2_t, int32, 3, _
 DEF_RVV_TUPLE_TYPE (vuint32m2x3_t, 18, __rvv_uint32m2x3_t, vuint32m2_t, uint32, 3, _u32m2x3)
 DEF_RVV_TUPLE_TYPE (vint32m2x4_t, 17, __rvv_int32m2x4_t, vint32m2_t, int32, 4, _i32m2x4)
 DEF_RVV_TUPLE_TYPE (vuint32m2x4_t, 18, __rvv_uint32m2x4_t, vuint32m2_t, uint32, 4, _u32m2x4)
+/* LMUL = 4:
+   Machine mode = VNx16SImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx16SI, VNx8SI, VNx4SI, _i32m4,
+	      _i32, _e32m4)
+DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx16SI, VNx8SI, VNx4SI, _u32m4,
+	      _u32, _e32m4)
 /* Define tuple types for SEW = 32, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vint32m4x2_t, 17, __rvv_int32m4x2_t, vint32m4_t, int32, 2, _i32m4x2)
 DEF_RVV_TUPLE_TYPE (vuint32m4x2_t, 18, __rvv_uint32m4x2_t, vuint32m4_t, uint32, 2, _u32m4x2)
+/* LMUL = 8:
+   Machine mode = VNx32SImode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx32SI, VNx16SI, VNx8SI, _i32m8,
+	      _i32, _e32m8)
+DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx32SI, VNx16SI, VNx8SI,
+	      _u32m8, _u32, _e32m8)
+
+/* SEW = 64:
+   Disable when !TARGET_VECTOR_ELEN_64.  */
+DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, VNx2DI, VNx1DI, VOID, _i64m1,
+	      _i64, _e64m1)
+DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx2DI, VNx1DI, VOID, _u64m1,
+	      _u64, _e64m1)
 /* Define tuple types for SEW = 64, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vint64m1x2_t, 17, __rvv_int64m1x2_t, vint64m1_t, int64, 2, _i64m1x2)
 DEF_RVV_TUPLE_TYPE (vuint64m1x2_t, 18, __rvv_uint64m1x2_t, vuint64m1_t, uint64, 2, _u64m1x2)
@@ -510,6 +451,10 @@ DEF_RVV_TUPLE_TYPE (vint64m1x7_t, 17, __rvv_int64m1x7_t, vint64m1_t, int64, 7, _
 DEF_RVV_TUPLE_TYPE (vuint64m1x7_t, 18, __rvv_uint64m1x7_t, vuint64m1_t, uint64, 7, _u64m1x7)
 DEF_RVV_TUPLE_TYPE (vint64m1x8_t, 17, __rvv_int64m1x8_t, vint64m1_t, int64, 8, _i64m1x8)
 DEF_RVV_TUPLE_TYPE (vuint64m1x8_t, 18, __rvv_uint64m1x8_t, vuint64m1_t, uint64, 8, _u64m1x8)
+DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, VNx4DI, VNx2DI, VOID, _i64m2,
+	      _i64, _e64m2)
+DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx4DI, VNx2DI, VOID, _u64m2,
+	      _u64, _e64m2)
 /* Define tuple types for SEW = 64, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vint64m2x2_t, 17, __rvv_int64m2x2_t, vint64m2_t, int64, 2, _i64m2x2)
 DEF_RVV_TUPLE_TYPE (vuint64m2x2_t, 18, __rvv_uint64m2x2_t, vuint64m2_t, uint64, 2, _u64m2x2)
@@ -517,11 +462,25 @@ DEF_RVV_TUPLE_TYPE (vint64m2x3_t, 17, __rvv_int64m2x3_t, vint64m2_t, int64, 3, _
 DEF_RVV_TUPLE_TYPE (vuint64m2x3_t, 18, __rvv_uint64m2x3_t, vuint64m2_t, uint64, 3, _u64m2x3)
 DEF_RVV_TUPLE_TYPE (vint64m2x4_t, 17, __rvv_int64m2x4_t, vint64m2_t, int64, 4, _i64m2x4)
 DEF_RVV_TUPLE_TYPE (vuint64m2x4_t, 18, __rvv_uint64m2x4_t, vuint64m2_t, uint64, 4, _u64m2x4)
+DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, VNx8DI, VNx4DI, VOID, _i64m4,
+	      _i64, _e64m4)
+DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx8DI, VNx4DI, VOID, _u64m4,
+	      _u64, _e64m4)
 /* Define tuple types for SEW = 64, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vint64m4x2_t, 17, __rvv_int64m4x2_t, vint64m4_t, int64, 2, _i64m4x2)
 DEF_RVV_TUPLE_TYPE (vuint64m4x2_t, 18, __rvv_uint64m4x2_t, vuint64m4_t, uint64, 2, _u64m4x2)
+DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, VNx16DI, VNx8DI, VOID, _i64m8,
+	      _i64, _e64m8)
+DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx16DI, VNx8DI, VOID, _u64m8,
+	      _u64, _e64m8)
 
-/* Define floating-point tuple types.  */
+/* Disable all when !TARGET_VECTOR_ELEN_FP_32.  */
+/* LMUL = 1/2:
+   Only enable when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1SFmode when TARGET_MIN_VLEN < 128.
+   Machine mode = VNx2SFmode when TARGET_MIN_VLEN >= 128.  */
+DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx2SF, VNx1SF, VOID,
+	      _f32mf2, _f32, _e32mf2)
 /* Define tuple types for SEW = 32, LMUL = MF2.  */
 DEF_RVV_TUPLE_TYPE (vfloat32mf2x2_t, 20, __rvv_float32mf2x2_t, vfloat32mf2_t, float, 2, _f32mf2x2)
 DEF_RVV_TUPLE_TYPE (vfloat32mf2x3_t, 20, __rvv_float32mf2x3_t, vfloat32mf2_t, float, 3, _f32mf2x3)
@@ -530,6 +489,12 @@ DEF_RVV_TUPLE_TYPE (vfloat32mf2x5_t, 20, __rvv_float32mf2x5_t, vfloat32mf2_t, fl
 DEF_RVV_TUPLE_TYPE (vfloat32mf2x6_t, 20, __rvv_float32mf2x6_t, vfloat32mf2_t, float, 6, _f32mf2x6)
 DEF_RVV_TUPLE_TYPE (vfloat32mf2x7_t, 20, __rvv_float32mf2x7_t, vfloat32mf2_t, float, 7, _f32mf2x7)
 DEF_RVV_TUPLE_TYPE (vfloat32mf2x8_t, 20, __rvv_float32mf2x8_t, vfloat32mf2_t, float, 8, _f32mf2x8)
+/* LMUL = 1:
+   Machine mode = VNx4SFmode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx4SF, VNx2SF, VNx1SF,
+	      _f32m1, _f32, _e32m1)
 /* Define tuple types for SEW = 32, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vfloat32m1x2_t, 19, __rvv_float32m1x2_t, vfloat32m1_t, float, 2, _f32m1x2)
 DEF_RVV_TUPLE_TYPE (vfloat32m1x3_t, 19, __rvv_float32m1x3_t, vfloat32m1_t, float, 3, _f32m1x3)
@@ -538,12 +503,35 @@ DEF_RVV_TUPLE_TYPE (vfloat32m1x5_t, 19, __rvv_float32m1x5_t, vfloat32m1_t, doubl
 DEF_RVV_TUPLE_TYPE (vfloat32m1x6_t, 19, __rvv_float32m1x6_t, vfloat32m1_t, float, 6, _f32m1x6)
 DEF_RVV_TUPLE_TYPE (vfloat32m1x7_t, 19, __rvv_float32m1x7_t, vfloat32m1_t, float, 7, _f32m1x7)
 DEF_RVV_TUPLE_TYPE (vfloat32m1x8_t, 19, __rvv_float32m1x8_t, vfloat32m1_t, float, 8, _f32m1x8)
+/* LMUL = 2:
+   Machine mode = VNx8SFmode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx8SF, VNx4SF, VNx2SF,
+	      _f32m2, _f32, _e32m2)
 /* Define tuple types for SEW = 32, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vfloat32m2x2_t, 19, __rvv_float32m2x2_t, vfloat32m2_t, float, 2, _f32m2x2)
 DEF_RVV_TUPLE_TYPE (vfloat32m2x3_t, 19, __rvv_float32m2x3_t, vfloat32m2_t, float, 3, _f32m2x3)
 DEF_RVV_TUPLE_TYPE (vfloat32m2x4_t, 19, __rvv_float32m2x4_t, vfloat32m2_t, float, 4, _f32m2x4)
+/* LMUL = 4:
+   Machine mode = VNx16SFmode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx16SF, VNx8SF, VNx4SF,
+	      _f32m4, _f32, _e32m4)
 /* Define tuple types for SEW = 32, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vfloat32m4x2_t, 19, __rvv_float32m4x2_t, vfloat32m4_t, float, 2, _f32m4x2)
+/* LMUL = 8:
+   Machine mode = VNx32SFmode when TARGET_MIN_VLEN >= 128.
+   Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
+   Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
+DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx32SF, VNx16SF, VNx8SF,
+	      _f32m8, _f32, _e32m8)
+
+/* SEW = 64:
+   Disable when !TARGET_VECTOR_ELEN_FP_64.  */
+DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx2DF, VNx1DF, VOID, _f64m1,
+	      _f64, _e64m1)
 /* Define tuple types for SEW = 64, LMUL = M1.  */
 DEF_RVV_TUPLE_TYPE (vfloat64m1x2_t, 19, __rvv_float64m1x2_t, vfloat64m1_t, double, 2, _f64m1x2)
 DEF_RVV_TUPLE_TYPE (vfloat64m1x3_t, 19, __rvv_float64m1x3_t, vfloat64m1_t, double, 3, _f64m1x3)
@@ -552,12 +540,18 @@ DEF_RVV_TUPLE_TYPE (vfloat64m1x5_t, 19, __rvv_float64m1x5_t, vfloat64m1_t, doubl
 DEF_RVV_TUPLE_TYPE (vfloat64m1x6_t, 19, __rvv_float64m1x6_t, vfloat64m1_t, double, 6, _f64m1x6)
 DEF_RVV_TUPLE_TYPE (vfloat64m1x7_t, 19, __rvv_float64m1x7_t, vfloat64m1_t, double, 7, _f64m1x7)
 DEF_RVV_TUPLE_TYPE (vfloat64m1x8_t, 19, __rvv_float64m1x8_t, vfloat64m1_t, double, 8, _f64m1x8)
+DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx4DF, VNx2DF, VOID, _f64m2,
+	      _f64, _e64m2)
 /* Define tuple types for SEW = 64, LMUL = M2.  */
 DEF_RVV_TUPLE_TYPE (vfloat64m2x2_t, 19, __rvv_float64m2x2_t, vfloat64m2_t, double, 2, _f64m2x2)
 DEF_RVV_TUPLE_TYPE (vfloat64m2x3_t, 19, __rvv_float64m2x3_t, vfloat64m2_t, double, 3, _f64m2x3)
 DEF_RVV_TUPLE_TYPE (vfloat64m2x4_t, 19, __rvv_float64m2x4_t, vfloat64m2_t, double, 4, _f64m2x4)
+DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID, _f64m4,
+	      _f64, _e64m4)
 /* Define tuple types for SEW = 64, LMUL = M4.  */
 DEF_RVV_TUPLE_TYPE (vfloat64m4x2_t, 19, __rvv_float64m4x2_t, vfloat64m4_t, double, 2, _f64m4x2)
+DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
+	      _f64, _e64m8)
 
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
@@ -647,6 +641,7 @@ DEF_RVV_BASE_TYPE (vlmul_ext_x16, get_vector_type (type_idx))
 DEF_RVV_BASE_TYPE (vlmul_ext_x32, get_vector_type (type_idx))
 DEF_RVV_BASE_TYPE (vlmul_ext_x64, get_vector_type (type_idx))
 DEF_RVV_BASE_TYPE (size_ptr, build_pointer_type (size_type_node))
+DEF_RVV_BASE_TYPE (tuple_subpart, get_tuple_subpart_type (type_idx))
 
 #include "riscv-vector-type-indexer.gen.def"
 
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 93261a72134..9acfa035027 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -185,6 +185,7 @@ struct rvv_arg_type_info
   tree get_scalar_type (vector_type_index) const;
   tree get_vector_type (vector_type_index) const;
   tree get_tree_type (vector_type_index) const;
+  tree get_tuple_subpart_type (vector_type_index) const;
 };
 
 /* Static information for each operand.  */
@@ -656,6 +657,16 @@ function_base::can_be_overloaded_p (enum predication_type_index) const
   return true;
 }
 
+/* Return the single field in tuple type TYPE.  */
+inline tree
+tuple_type_field (tree type)
+{
+  for (tree field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field))
+    if (TREE_CODE (field) == FIELD_DECL)
+      return field;
+  gcc_unreachable ();
+}
+
 } // end namespace riscv_vector
 
 #endif
-- 
2.36.1



* Re: [PATCH] RISC-V: Add tuple type vget/vset intrinsics
  2023-04-19 13:00 [PATCH] RISC-V: Add tuple type vget/vset intrinsics juzhe.zhong
@ 2023-05-03 10:41 ` Kito Cheng
  0 siblings, 0 replies; 2+ messages in thread
From: Kito Cheng @ 2023-05-03 10:41 UTC (permalink / raw)
  To: juzhe.zhong; +Cc: gcc-patches, palmer

Thanks, committed to trunk!

