public inbox for gcc-patches@gcc.gnu.org
* [PATCH] RISC-V: Support vle.v/vse.v intrinsics
@ 2022-12-23  0:52 juzhe.zhong
  2022-12-23  0:56 ` 钟居哲
  0 siblings, 1 reply; 3+ messages in thread
From: juzhe.zhong @ 2022-12-23  0:52 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Ju-Zhe Zhong

From: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-protos.h (get_avl_type_rtx): New function.
        * config/riscv/riscv-v.cc (get_avl_type_rtx): Ditto.
        * config/riscv/riscv-vector-builtins-bases.cc (class loadstore): New class.
        (BASE): Ditto.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.      
        * config/riscv/riscv-vector-builtins-functions.def (vle): Ditto.
        (vse): Ditto.
        * config/riscv/riscv-vector-builtins-shapes.cc (build_one): Ditto.
        (struct loadstore_def): Ditto.
        (SHAPE): Ditto.
        * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
        * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_U_OPS): New macro.
        (DEF_RVV_F_OPS): Ditto.
        (vuint8mf8_t): Add corresponding mask type.
        (vuint8mf4_t): Ditto.
        (vuint8mf2_t): Ditto.
        (vuint8m1_t): Ditto.
        (vuint8m2_t): Ditto.
        (vuint8m4_t): Ditto.
        (vuint8m8_t): Ditto.
        (vuint16mf4_t): Ditto.
        (vuint16mf2_t): Ditto.
        (vuint16m1_t): Ditto.
        (vuint16m2_t): Ditto.
        (vuint16m4_t): Ditto.
        (vuint16m8_t): Ditto.
        (vuint32mf2_t): Ditto.
        (vuint32m1_t): Ditto.
        (vuint32m2_t): Ditto.
        (vuint32m4_t): Ditto.
        (vuint32m8_t): Ditto.
        (vuint64m1_t): Ditto.
        (vuint64m2_t): Ditto.
        (vuint64m4_t): Ditto.
        (vuint64m8_t): Ditto.
        (vfloat32mf2_t): Ditto.
        (vfloat32m1_t): Ditto.
        (vfloat32m2_t): Ditto.
        (vfloat32m4_t): Ditto.
        (vfloat32m8_t): Ditto.
        (vfloat64m1_t): Ditto.
        (vfloat64m2_t): Ditto.
        (vfloat64m4_t): Ditto.
        (vfloat64m8_t): Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Adjust for new macro.
        (DEF_RVV_I_OPS): Ditto.
        (DEF_RVV_U_OPS): New macro.
        (DEF_RVV_F_OPS): New macro.
        (use_real_mask_p): New function.
        (use_real_merge_p): Ditto.
        (get_tail_policy_for_pred): Ditto.
        (get_mask_policy_for_pred): Ditto.
        (function_builder::apply_predication): Ditto.
        (function_builder::append_base_name): Ditto.
        (function_builder::append_sew): Ditto.
        (function_expander::add_vundef_operand): Ditto.
        (function_expander::add_mem_operand): Ditto.
        (function_expander::use_contiguous_load_insn): Ditto.
        (function_expander::use_contiguous_store_insn): Ditto.
        * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Adjust for adding mask type.
        (vbool64_t): Ditto.
        (vbool32_t): Ditto.
        (vbool16_t): Ditto.
        (vbool8_t): Ditto.
        (vbool4_t): Ditto.
        (vbool2_t): Ditto.
        (vbool1_t): Ditto.
        (vint8mf8_t): Ditto.
        (vint8mf4_t): Ditto.
        (vint8mf2_t): Ditto.
        (vint8m1_t): Ditto.
        (vint8m2_t): Ditto.
        (vint8m4_t): Ditto.
        (vint8m8_t): Ditto.
        (vint16mf4_t): Ditto.
        (vint16mf2_t): Ditto.
        (vint16m1_t): Ditto.
        (vint16m2_t): Ditto.
        (vint16m4_t): Ditto.
        (vint16m8_t): Ditto.
        (vint32mf2_t): Ditto.
        (vint32m1_t): Ditto.
        (vint32m2_t): Ditto.
        (vint32m4_t): Ditto.
        (vint32m8_t): Ditto.
        (vint64m1_t): Ditto.
        (vint64m2_t): Ditto.
        (vint64m4_t): Ditto.
        (vint64m8_t): Ditto.
        (vfloat32mf2_t): Ditto.
        (vfloat32m1_t): Ditto.
        (vfloat32m2_t): Ditto.
        (vfloat32m4_t): Ditto.
        (vfloat32m8_t): Ditto.
        (vfloat64m1_t): Ditto.
        (vfloat64m2_t): Ditto.
        (vfloat64m4_t): Ditto.
        (vfloat64m8_t): Ditto.
        * config/riscv/riscv-vector-builtins.h (function_expander::add_output_operand): New function.
        (function_expander::add_all_one_mask_operand): Ditto.
        (function_expander::add_fixed_operand): Ditto.
        (function_expander::vector_mode): Ditto.
        (function_base::apply_vl_p): Ditto.
        (function_base::can_be_overloaded_p): Ditto.
        * config/riscv/riscv-vsetvl.cc (get_vl): Remove the restriction that the AVL must be VLMAX.
        * config/riscv/t-riscv: Add include file.

---
 gcc/config/riscv/riscv-protos.h               |   1 +
 gcc/config/riscv/riscv-v.cc                   |  10 +-
 .../riscv/riscv-vector-builtins-bases.cc      |  49 +++-
 .../riscv/riscv-vector-builtins-bases.h       |   2 +
 .../riscv/riscv-vector-builtins-functions.def |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  38 ++-
 .../riscv/riscv-vector-builtins-shapes.h      |   1 +
 .../riscv/riscv-vector-builtins-types.def     |  49 +++-
 gcc/config/riscv/riscv-vector-builtins.cc     | 236 +++++++++++++++++-
 gcc/config/riscv/riscv-vector-builtins.def    | 122 ++++-----
 gcc/config/riscv/riscv-vector-builtins.h      |  65 +++++
 gcc/config/riscv/riscv-vsetvl.cc              |   4 -
 gcc/config/riscv/t-riscv                      |   2 +-
 13 files changed, 506 insertions(+), 76 deletions(-)

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index cfd0f284f91..64ee56b8a7c 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -171,6 +171,7 @@ enum mask_policy
 };
 enum tail_policy get_prefer_tail_policy ();
 enum mask_policy get_prefer_mask_policy ();
+rtx get_avl_type_rtx (enum avl_type);
 }
 
 /* We classify builtin types into two classes:
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index f02a048f76d..b616ee3e6b3 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -79,8 +79,7 @@ public:
   }
   void add_avl_type_operand ()
   {
-    rtx vlmax_rtx = gen_int_mode (avl_type::VLMAX, Pmode);
-    add_input_operand (vlmax_rtx, Pmode);
+    add_input_operand (get_avl_type_rtx (avl_type::VLMAX), Pmode);
   }
 
   void expand (enum insn_code icode, bool temporary_volatile_p = false)
@@ -342,4 +341,11 @@ get_prefer_mask_policy ()
   return MASK_ANY;
 }
 
+/* Get avl_type rtx.  */
+rtx
+get_avl_type_rtx (enum avl_type type)
+{
+  return gen_int_mode (type, Pmode);
+}
+
 } // namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c1193dbbfb5..10373e5ccf2 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -53,6 +53,11 @@ template<bool VLMAX_P>
 class vsetvl : public function_base
 {
 public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
   rtx expand (function_expander &e) const override
   {
     if (VLMAX_P)
@@ -79,11 +84,47 @@ public:
   }
 };
 
+/* Implements vle.v/vse.v codegen.  */
+template <bool STORE_P>
+class loadstore : public function_base
+{
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P)
+      return true;
+    return pred != PRED_TYPE_none && pred != PRED_TYPE_mu;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (STORE_P)
+      return e.use_contiguous_store_insn (code_for_pred_mov (e.vector_mode ()));
+    else
+      return e.use_contiguous_load_insn (code_for_pred_mov (e.vector_mode ()));
+  }
+};
+
 static CONSTEXPR const vsetvl<false> vsetvl_obj;
 static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
-namespace bases {
-const function_base *const vsetvl = &vsetvl_obj;
-const function_base *const vsetvlmax = &vsetvlmax_obj;
-}
+static CONSTEXPR const loadstore<false> vle_obj;
+static CONSTEXPR const loadstore<true> vse_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (vsetvl)
+BASE (vsetvlmax)
+BASE (vle)
+BASE (vse)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index a0ae18eef03..79684bcb50d 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -26,6 +26,8 @@ namespace riscv_vector {
 namespace bases {
 extern const function_base *const vsetvl;
 extern const function_base *const vsetvlmax;
+extern const function_base *const vle;
+extern const function_base *const vse;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index dc41537865e..e5ebb7d829c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -39,5 +39,8 @@ along with GCC; see the file COPYING3. If not see
 /* 6. Configuration-Setting Instructions.  */
 DEF_RVV_FUNCTION (vsetvl, vsetvl, none_preds, i_none_size_size_ops)
 DEF_RVV_FUNCTION (vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+/* 7. Vector Loads and Stores.  */
+DEF_RVV_FUNCTION (vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
 
 #undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index bb2ee8767a0..0332c031ce4 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -48,6 +48,7 @@ build_one (function_builder &b, const function_group_info &group,
   tree return_type = group.ops_infos.ret.get_tree_type (
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
+  b.apply_predication (function_instance, return_type, argument_types);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
 }
@@ -93,13 +94,46 @@ struct vsetvl_def : public build_base
     /* vsetvl* instruction doesn't have C++ overloaded functions.  */
     if (overloaded_p)
       return nullptr;
-    b.append_name ("__riscv_");
-    b.append_name (instance.base_name);
+    b.append_base_name (instance.base_name);
     b.append_name (type_suffixes[instance.type.index].vsetvl);
     return b.finish_name ();
   }
 };
+
+/* loadstore_def class.  */
+struct loadstore_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    tree type = builtin_types[instance.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+    int sew = GET_MODE_BITSIZE (GET_MODE_INNER (mode));
+    /* vop --> vop<sew>.  */
+    b.append_sew (sew);
+
+    /* vop<sew>_v --> vop<sew>_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop<sew> --> vop<sew>_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop<sew>_v --> vop<sew>_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
+SHAPE(loadstore, loadstore)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index f2d876fb133..b17dcd88877 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -26,6 +26,7 @@ namespace riscv_vector {
 namespace shapes {
 extern const function_shape *const vsetvl;
 extern const function_shape *const vsetvlmax;
+extern const function_shape *const loadstore;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index f282a5e7654..6a867c99987 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -18,12 +18,24 @@ You should have received a copy of the GNU General Public License
 along with GCC; see the file COPYING3. If not see
 <http://www.gnu.org/licenses/>. */
 
-/* Use "DEF_ALL_SIGNED_INTEGER" macro include all signed integer which will be
+/* Use the "DEF_RVV_I_OPS" macro to include all signed integers which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_I_OPS
 #define DEF_RVV_I_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use the "DEF_RVV_U_OPS" macro to include all unsigned integers which will
+   be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U_OPS
+#define DEF_RVV_U_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_F_OPS" macro to include all floating-point types which
+   will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_F_OPS
+#define DEF_RVV_F_OPS(TYPE, REQUIRE)
+#endif
+
 DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
 DEF_RVV_I_OPS (vint8mf4_t, 0)
 DEF_RVV_I_OPS (vint8mf2_t, 0)
@@ -47,4 +59,39 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
 
+DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint8mf4_t, 0)
+DEF_RVV_U_OPS (vuint8mf2_t, 0)
+DEF_RVV_U_OPS (vuint8m1_t, 0)
+DEF_RVV_U_OPS (vuint8m2_t, 0)
+DEF_RVV_U_OPS (vuint8m4_t, 0)
+DEF_RVV_U_OPS (vuint8m8_t, 0)
+DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint16mf2_t, 0)
+DEF_RVV_U_OPS (vuint16m1_t, 0)
+DEF_RVV_U_OPS (vuint16m2_t, 0)
+DEF_RVV_U_OPS (vuint16m4_t, 0)
+DEF_RVV_U_OPS (vuint16m8_t, 0)
+DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint32m1_t, 0)
+DEF_RVV_U_OPS (vuint32m2_t, 0)
+DEF_RVV_U_OPS (vuint32m4_t, 0)
+DEF_RVV_U_OPS (vuint32m8_t, 0)
+DEF_RVV_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
+
 #undef DEF_RVV_I_OPS
+#undef DEF_RVV_U_OPS
+#undef DEF_RVV_F_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 43150aa47a4..9170776f979 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -44,6 +44,7 @@
 #include "attribs.h"
 #include "targhooks.h"
 #include "regs.h"
+#include "emit-rtl.h"
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
@@ -105,11 +106,20 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
 const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-		     VSETVL_SUFFIX)                                            \
+		     VSETVL_SUFFIX, MASK_TYPE)                                 \
   {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
 #include "riscv-vector-builtins.def"
 };
 
+/* Mask type for each RVV type.  */
+const vector_type_index mask_types[NUM_VECTOR_TYPES + 1] = {
+#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
+		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
+		     VSETVL_SUFFIX, MASK_TYPE)                                 \
+  VECTOR_TYPE_##MASK_TYPE,
+#include "riscv-vector-builtins.def"
+};
+
 /* Static information about predication suffix for each RVV type.  */
 const char *const predication_suffixes[NUM_PRED_TYPES] = {
   "", /* PRED_TYPE_none.  */
@@ -123,6 +133,14 @@ static const rvv_type_info i_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of all types that will be registered for intrinsic functions.  */
+static const rvv_type_info all_ops[] = {
+#define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_F_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
   = rvv_arg_type_info (NUM_BASE_TYPES);
 
@@ -134,10 +152,28 @@ static CONSTEXPR const rvv_arg_type_info void_args[]
 static CONSTEXPR const rvv_arg_type_info size_args[]
   = {rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
 
+/* A list of args for vector_type func (const scalar_type *) function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, vector_type) function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
 /* A list of none preds that will be registered for intrinsic functions.  */
 static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
 
+/* vop/vop_m/vop_tu/vop_tum/vop_tumu/vop_mu will be registered.  */
+static CONSTEXPR const predication_type_index full_preds[]
+  = {PRED_TYPE_none, PRED_TYPE_m,  PRED_TYPE_tu,  PRED_TYPE_tum,
+     PRED_TYPE_tumu, PRED_TYPE_mu, NUM_PRED_TYPES};
+
+/* vop/vop_m will be registered.  */
+static CONSTEXPR const predication_type_index none_m_preds[]
+  = {PRED_TYPE_none, PRED_TYPE_m, NUM_PRED_TYPES};
+
 /* A static operand information for size_t func (void) function registration. */
 static CONSTEXPR const rvv_op_info i_none_size_void_ops
   = {i_ops,				/* Types */
@@ -153,6 +189,22 @@ static CONSTEXPR const rvv_op_info i_none_size_size_ops
      rvv_arg_type_info (RVV_BASE_size), /* Return type */
      size_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ops
+  = {all_ops,				  /* Types */
+     OP_TYPE_v,				  /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ops
+  = {all_ops,				/* Types */
+     OP_TYPE_v,				/* Suffix */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type */
+     scalar_ptr_args /* Args */};
+
 /* A list of all RVV intrinsic functions.  */
 static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
@@ -362,6 +414,42 @@ check_required_extensions (uint64_t required_extensions)
   return true;
 }
 
+/* Return true if predication is using a real mask operand.  */
+static bool
+use_real_mask_p (enum predication_type_index pred)
+{
+  return pred == PRED_TYPE_m || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
+	 || pred == PRED_TYPE_mu;
+}
+
+/* Return true if predication is using a real merge operand.  */
+static bool
+use_real_merge_p (enum predication_type_index pred)
+{
+  return pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
+	 || pred == PRED_TYPE_mu;
+}
+
+/* Get the TAIL policy for a predication type.  If the predication indicates
+   TU, return TAIL_UNDISTURBED; otherwise return the preferred default.  */
+static rtx
+get_tail_policy_for_pred (enum predication_type_index pred)
+{
+  if (pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu)
+    return gen_int_mode (TAIL_UNDISTURBED, Pmode);
+  return gen_int_mode (get_prefer_tail_policy (), Pmode);
+}
+
+/* Get the MASK policy for a predication type.  If the predication indicates
+   MU, return MASK_UNDISTURBED; otherwise return the preferred default.  */
+static rtx
+get_mask_policy_for_pred (enum predication_type_index pred)
+{
+  if (pred == PRED_TYPE_tumu || pred == PRED_TYPE_mu)
+    return gen_int_mode (MASK_UNDISTURBED, Pmode);
+  return gen_int_mode (get_prefer_mask_policy (), Pmode);
+}
+
 tree
 rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
 {
@@ -546,6 +634,28 @@ function_builder::allocate_argument_types (const function_instance &instance,
       instance.op_info->args[i].get_tree_type (instance.type.index));
 }
 
+/* Apply predication to the argument types.  */
+void
+function_builder::apply_predication (const function_instance &instance,
+				     tree return_type,
+				     vec<tree> &argument_types) const
+{
+  /* These predication types take an extra merge argument of the return type.  */
+  if (instance.pred == PRED_TYPE_tu || instance.pred == PRED_TYPE_tum
+      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
+    argument_types.quick_insert (0, return_type);
+
+  /* These predication types take an extra mask argument.  */
+  tree mask_type = builtin_types[mask_types[instance.type.index]].vector;
+  if (instance.pred == PRED_TYPE_m || instance.pred == PRED_TYPE_tum
+      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
+    argument_types.quick_insert (0, mask_type);
+
+  /* Check whether the function needs a trailing vl argument.  */
+  if (instance.base->apply_vl_p ())
+    argument_types.quick_push (size_type_node);
+}
+
 /* Register all the functions in GROUP.  */
 void
 function_builder::register_function_group (const function_group_info &group)
@@ -560,6 +670,37 @@ function_builder::append_name (const char *name)
   obstack_grow (&m_string_obstack, name, strlen (name));
 }
 
+/* Add "__riscv_" and "name".  */
+void
+function_builder::append_base_name (const char *name)
+{
+  append_name ("__riscv_");
+  append_name (name);
+}
+
+/* Add SEW into function name.  */
+void
+function_builder::append_sew (int sew)
+{
+  switch (sew)
+    {
+    case 8:
+      append_name ("8");
+      break;
+    case 16:
+      append_name ("16");
+      break;
+    case 32:
+      append_name ("32");
+      break;
+    case 64:
+      append_name ("64");
+      break;
+    default:
+      gcc_unreachable ();
+    }
+}
+
 /* Zero-terminate and complete the function name being built.  */
 char *
 function_builder::finish_name ()
@@ -694,6 +835,99 @@ function_expander::add_input_operand (unsigned argno)
   add_input_operand (TYPE_MODE (TREE_TYPE (arg)), x);
 }
 
+/* Since we may normalize vop/vop_tu/vop_m/vop_tumu into a single pattern,
+   we add an undef merge operand for intrinsics that don't need a real one.  */
+void
+function_expander::add_vundef_operand (machine_mode mode)
+{
+  rtx vundef = gen_rtx_UNSPEC (mode, gen_rtvec (1, const0_rtx), UNSPEC_VUNDEF);
+  add_input_operand (mode, vundef);
+}
+
+/* Add a memory operand with mode MODE and address ADDR.  */
+rtx
+function_expander::add_mem_operand (machine_mode mode, rtx addr)
+{
+  gcc_assert (VECTOR_MODE_P (mode));
+  rtx mem = gen_rtx_MEM (mode, memory_address (mode, addr));
+  /* The memory is only guaranteed to be element-aligned.  */
+  set_mem_align (mem, GET_MODE_ALIGNMENT (GET_MODE_INNER (mode)));
+  add_fixed_operand (mem);
+  return mem;
+}
+
+/* Expand a contiguous load with instruction pattern ICODE.  */
+rtx
+function_expander::use_contiguous_load_insn (insn_code icode)
+{
+  gcc_assert (call_expr_nargs (exp) > 0);
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  tree mask_type = builtin_types[mask_types[type.index]].vector;
+  machine_mode mask_mode = TYPE_MODE (mask_type);
+
+  /* Record the offset to get the argument.  */
+  int arg_offset = 0;
+
+  if (use_real_mask_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_all_one_mask_operand (mask_mode);
+
+  if (use_real_merge_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_vundef_operand (mode);
+
+  tree addr_arg = CALL_EXPR_ARG (exp, arg_offset++);
+  rtx addr = expand_normal (addr_arg);
+  add_mem_operand (mode, addr);
+
+  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (argno);
+
+  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
+  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
+  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  return generate_insn (icode);
+}
+
+/* Expand a contiguous store with instruction pattern ICODE.  */
+rtx
+function_expander::use_contiguous_store_insn (insn_code icode)
+{
+  gcc_assert (call_expr_nargs (exp) > 0);
+  machine_mode mode = TYPE_MODE (builtin_types[type.index].vector);
+  tree mask_type = builtin_types[mask_types[type.index]].vector;
+  machine_mode mask_mode = TYPE_MODE (mask_type);
+
+  /* Record the offset to get the argument.  */
+  int arg_offset = 0;
+
+  int addr_loc = use_real_mask_p (pred) ? 1 : 0;
+  tree addr_arg = CALL_EXPR_ARG (exp, addr_loc);
+  rtx addr = expand_normal (addr_arg);
+  rtx mem = add_mem_operand (mode, addr);
+
+  if (use_real_mask_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_all_one_mask_operand (mask_mode);
+
+  /* To model the "+m" constraint, add the memory operand as an input too.  */
+  add_input_operand (mode, mem);
+
+  arg_offset++;
+  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (argno);
+
+  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
+  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
+  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  return generate_insn (icode);
+}
+
 /* Generate instruction ICODE, given that its operands have already
    been added to M_OPS.  Return the value of the first operand.  */
 rtx
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index b7a633ed376..7636e34d595 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -44,7 +44,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef DEF_RVV_TYPE
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-		     VSETVL_SUFFIX)
+		     VSETVL_SUFFIX, MASK_TYPE)
 #endif
 
 /* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
@@ -61,212 +61,212 @@ along with GCC; see the file COPYING3.  If not see
 
 /* SEW/LMUL = 64:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1BImode.  */
-DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , )
+DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , , vbool64_t)
 /* SEW/LMUL = 32:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , )
+DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , , vbool32_t)
 /* SEW/LMUL = 16:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32.  */
-DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , )
+DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , , vbool16_t)
 /* SEW/LMUL = 8:
    Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , )
+DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , , vbool8_t)
 /* SEW/LMUL = 4:
    Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , )
+DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , , vbool4_t)
 /* SEW/LMUL = 2:
    Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , )
+DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , , vbool2_t)
 /* SEW/LMUL = 1:
    Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , )
+DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , , vbool1_t)
 
 /* LMUL = 1/8:
    Only enble when TARGET_MIN_VLEN > 32 and machine mode = VNx1QImode.  */
 DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, intQI, VNx1QI, VOID, _i8mf8, _i8,
-	      _e8mf8)
+	      _e8mf8, vbool64_t)
 DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, unsigned_intQI, VNx1QI, VOID,
-	      _u8mf8, _u8, _e8mf8)
+	      _u8mf8, _u8, _e8mf8, vbool64_t)
 /* LMUL = 1/4:
    Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, intQI, VNx2QI, VNx1QI, _i8mf4,
-	      _i8, _e8mf4)
+	      _i8, _e8mf4, vbool32_t)
 DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, unsigned_intQI, VNx2QI, VNx1QI,
-	      _u8mf4, _u8, _e8mf4)
+	      _u8mf4, _u8, _e8mf4, vbool32_t)
 /* LMUL = 1/2:
    Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, intQI, VNx4QI, VNx2QI, _i8mf2,
-	      _i8, _e8mf2)
+	      _i8, _e8mf2, vbool16_t)
 DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, unsigned_intQI, VNx4QI, VNx2QI,
-	      _u8mf2, _u8, _e8mf2)
+	      _u8mf2, _u8, _e8mf2, vbool16_t)
 /* LMUL = 1:
    Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, intQI, VNx8QI, VNx4QI, _i8m1, _i8,
-	      _e8m1)
+	      _e8m1, vbool8_t)
 DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, unsigned_intQI, VNx8QI, VNx4QI,
-	      _u8m1, _u8, _e8m1)
+	      _u8m1, _u8, _e8m1, vbool8_t)
 /* LMUL = 2:
    Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, intQI, VNx16QI, VNx8QI, _i8m2, _i8,
-	      _e8m2)
+	      _e8m2, vbool4_t)
 DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, unsigned_intQI, VNx16QI, VNx8QI,
-	      _u8m2, _u8, _e8m2)
+	      _u8m2, _u8, _e8m2, vbool4_t)
 /* LMUL = 4:
    Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, intQI, VNx32QI, VNx16QI, _i8m4,
-	      _i8, _e8m4)
+	      _i8, _e8m4, vbool2_t)
 DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, unsigned_intQI, VNx32QI, VNx16QI,
-	      _u8m4, _u8, _e8m4)
+	      _u8m4, _u8, _e8m4, vbool2_t)
 /* LMUL = 8:
    Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, intQI, VNx64QI, VNx32QI, _i8m8,
-	      _i8, _e8m8)
+	      _i8, _e8m8, vbool1_t)
 DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, unsigned_intQI, VNx64QI, VNx32QI,
-	      _u8m8, _u8, _e8m8)
+	      _u8m8, _u8, _e8m8, vbool1_t)
 
 /* LMUL = 1/4:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1HImode.  */
 DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, intHI, VNx1HI, VOID, _i16mf4,
-	      _i16, _e16mf4)
+	      _i16, _e16mf4, vbool64_t)
 DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, unsigned_intHI, VNx1HI, VOID,
-	      _u16mf4, _u16, _e16mf4)
+	      _u16mf4, _u16, _e16mf4, vbool64_t)
 /* LMUL = 1/2:
    Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, intHI, VNx2HI, VNx1HI, _i16mf2,
-	      _i16, _e16mf2)
+	      _i16, _e16mf2, vbool32_t)
 DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, unsigned_intHI, VNx2HI,
-	      VNx1HI, _u16mf2, _u16, _e16mf2)
+	      VNx1HI, _u16mf2, _u16, _e16mf2, vbool32_t)
 /* LMUL = 1:
    Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, intHI, VNx4HI, VNx2HI, _i16m1,
-	      _i16, _e16m1)
+	      _i16, _e16m1, vbool16_t)
 DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, unsigned_intHI, VNx4HI, VNx2HI,
-	      _u16m1, _u16, _e16m1)
+	      _u16m1, _u16, _e16m1, vbool16_t)
 /* LMUL = 2:
    Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, intHI, VNx8HI, VNx4HI, _i16m2,
-	      _i16, _e16m2)
+	      _i16, _e16m2, vbool8_t)
 DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, unsigned_intHI, VNx8HI, VNx4HI,
-	      _u16m2, _u16, _e16m2)
+	      _u16m2, _u16, _e16m2, vbool8_t)
 /* LMUL = 4:
    Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, intHI, VNx16HI, VNx8HI, _i16m4,
-	      _i16, _e16m4)
+	      _i16, _e16m4, vbool4_t)
 DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, unsigned_intHI, VNx16HI,
-	      VNx8HI, _u16m4, _u16, _e16m4)
+	      VNx8HI, _u16m4, _u16, _e16m4, vbool4_t)
 /* LMUL = 8:
    Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, intHI, VNx32HI, VNx16HI, _i16m8,
-	      _i16, _e16m8)
+	      _i16, _e16m8, vbool2_t)
 DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, unsigned_intHI, VNx32HI,
-	      VNx16HI, _u16m8, _u16, _e16m8)
+	      VNx16HI, _u16m8, _u16, _e16m8, vbool2_t)
 
 /* LMUL = 1/2:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SImode.  */
 DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx1SI, VOID, _i32mf2,
-	      _i32, _e32mf2)
+	      _i32, _e32mf2, vbool64_t)
 DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, unsigned_int32, VNx1SI, VOID,
-	      _u32mf2, _u32, _e32mf2)
+	      _u32mf2, _u32, _e32mf2, vbool64_t)
 /* LMUL = 1:
    Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx2SI, VNx1SI, _i32m1,
-	      _i32, _e32m1)
+	      _i32, _e32m1, vbool32_t)
 DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, unsigned_int32, VNx2SI, VNx1SI,
-	      _u32m1, _u32, _e32m1)
+	      _u32m1, _u32, _e32m1, vbool32_t)
 /* LMUL = 2:
    Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx4SI, VNx2SI, _i32m2,
-	      _i32, _e32m2)
+	      _i32, _e32m2, vbool16_t)
 DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, unsigned_int32, VNx4SI, VNx2SI,
-	      _u32m2, _u32, _e32m2)
+	      _u32m2, _u32, _e32m2, vbool16_t)
 /* LMUL = 4:
    Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx8SI, VNx4SI, _i32m4,
-	      _i32, _e32m4)
+	      _i32, _e32m4, vbool8_t)
 DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, unsigned_int32, VNx8SI, VNx4SI,
-	      _u32m4, _u32, _e32m4)
+	      _u32m4, _u32, _e32m4, vbool8_t)
 /* LMUL = 8:
    Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx16SI, VNx8SI, _i32m8,
-	      _i32, _e32m8)
+	      _i32, _e32m8, vbool4_t)
 DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, unsigned_int32, VNx16SI,
-	      VNx8SI, _u32m8, _u32, _e32m8)
+	      VNx8SI, _u32m8, _u32, _e32m8, vbool4_t)
 
 /* SEW = 64:
    Disable when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, intDI, VNx1DI, VOID, _i64m1,
-	      _i64, _e64m1)
+	      _i64, _e64m1, vbool64_t)
 DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, unsigned_intDI, VNx1DI, VOID,
-	      _u64m1, _u64, _e64m1)
+	      _u64m1, _u64, _e64m1, vbool64_t)
 DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, intDI, VNx2DI, VOID, _i64m2,
-	      _i64, _e64m2)
+	      _i64, _e64m2, vbool32_t)
 DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, unsigned_intDI, VNx2DI, VOID,
-	      _u64m2, _u64, _e64m2)
+	      _u64m2, _u64, _e64m2, vbool32_t)
 DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, intDI, VNx4DI, VOID, _i64m4,
-	      _i64, _e64m4)
+	      _i64, _e64m4, vbool16_t)
 DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, unsigned_intDI, VNx4DI, VOID,
-	      _u64m4, _u64, _e64m4)
+	      _u64m4, _u64, _e64m4, vbool16_t)
 DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, intDI, VNx8DI, VOID, _i64m8,
-	      _i64, _e64m8)
+	      _i64, _e64m8, vbool8_t)
 DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, unsigned_intDI, VNx8DI, VOID,
-	      _u64m8, _u64, _e64m8)
+	      _u64m8, _u64, _e64m8, vbool8_t)
 
 /* LMUL = 1/2:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SFmode.  */
 DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx1SF, VOID,
-	      _f32mf2, _f32, _e32mf2)
+	      _f32mf2, _f32, _e32mf2, vbool64_t)
 /* LMUL = 1:
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx2SF, VNx1SF,
-	      _f32m1, _f32, _e32m1)
+	      _f32m1, _f32, _e32m1, vbool32_t)
 /* LMUL = 2:
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx4SF, VNx2SF,
-	      _f32m2, _f32, _e32m2)
+	      _f32m2, _f32, _e32m2, vbool16_t)
 /* LMUL = 4:
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx8SF, VNx4SF,
-	      _f32m4, _f32, _e32m4)
+	      _f32m4, _f32, _e32m4, vbool8_t)
 /* LMUL = 8:
    Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
 DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx16SF, VNx8SF,
-	      _f32m8, _f32, _e32m8)
+	      _f32m8, _f32, _e32m8, vbool4_t)
 
 /* SEW = 64:
    Disable when !TARGET_VECTOR_FP64.  */
 DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx1DF, VOID, _f64m1,
-	      _f64, _e64m1)
+	      _f64, _e64m1, vbool64_t)
 DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx2DF, VOID, _f64m2,
-	      _f64, _e64m2)
+	      _f64, _e64m2, vbool32_t)
 DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx4DF, VOID, _f64m4,
-	      _f64, _e64m4)
+	      _f64, _e64m4, vbool16_t)
 DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx8DF, VOID, _f64m8,
-	      _f64, _e64m8)
+	      _f64, _e64m8, vbool8_t)
 
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 425da12326c..c13df99cb5b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -264,10 +264,13 @@ public:
   ~function_builder ();
 
   void allocate_argument_types (const function_instance &, vec<tree> &) const;
+  void apply_predication (const function_instance &, tree, vec<tree> &) const;
   void add_unique_function (const function_instance &, const function_shape *,
 			    tree, vec<tree> &);
   void register_function_group (const function_group_info &);
   void append_name (const char *);
+  void append_base_name (const char *);
+  void append_sew (int);
   char *finish_name ();
 
 private:
@@ -315,6 +318,16 @@ public:
 
   void add_input_operand (machine_mode, rtx);
   void add_input_operand (unsigned argno);
+  void add_output_operand (machine_mode, rtx);
+  void add_all_one_mask_operand (machine_mode mode);
+  void add_vundef_operand (machine_mode mode);
+  void add_fixed_operand (rtx);
+  rtx add_mem_operand (machine_mode, rtx);
+
+  machine_mode vector_mode (void) const;
+
+  rtx use_contiguous_load_insn (insn_code);
+  rtx use_contiguous_store_insn (insn_code);
   rtx generate_insn (insn_code);
 
   /* The function call expression.  */
@@ -342,6 +355,12 @@ public:
      in addition to reading its arguments and returning a result.  */
   virtual unsigned int call_properties (const function_instance &) const;
 
+  /* Return true if the intrinsic should apply a vl operand.  */
+  virtual bool apply_vl_p () const;
+
+  /* Return true if the intrinsic can be overloaded.  */
+  virtual bool can_be_overloaded_p (enum predication_type_index) const;
+
   /* Expand the given call into rtl.  Return the result of the function,
      or an arbitrary value if the function doesn't return a result.  */
   virtual rtx expand (function_expander &) const = 0;
@@ -394,6 +413,37 @@ function_expander::add_input_operand (machine_mode mode, rtx op)
   create_input_operand (&m_ops[opno++], op, mode);
 }
 
+/* Create an output operand, add it to M_OPS and increment OPNO.  */
+inline void
+function_expander::add_output_operand (machine_mode mode, rtx target)
+{
+  create_output_operand (&m_ops[opno++], target, mode);
+}
+
+/* Since we may normalize vop/vop_tu/vop_m/vop_tumu ... into a single
+   pattern, we add a fake all-true mask for the intrinsics that don't
+   need a real mask.  */
+inline void
+function_expander::add_all_one_mask_operand (machine_mode mode)
+{
+  add_input_operand (mode, CONSTM1_RTX (mode));
+}
+
+/* Add an operand that must be X.  The only way of legitimizing an
+   invalid X is to reload the address of a MEM.  */
+inline void
+function_expander::add_fixed_operand (rtx x)
+{
+  create_fixed_operand (&m_ops[opno++], x);
+}
+
+/* Return the machine_mode of the corresponding vector type.  */
+inline machine_mode
+function_expander::vector_mode (void) const
+{
+  return TYPE_MODE (builtin_types[type.index].vector);
+}
+
 /* Default implementation of function_base::call_properties, with conservatively
    correct behavior for floating-point instructions.  */
 inline unsigned int
@@ -405,6 +455,21 @@ function_base::call_properties (const function_instance &instance) const
   return flags;
 }
 
+/* We apply the vl operand by default since most of the intrinsics
+   have a vl operand.  */
+inline bool
+function_base::apply_vl_p () const
+{
+  return true;
+}
+
+/* Since most intrinsics can be overloaded, return true by default.  */
+inline bool
+function_base::can_be_overloaded_p (enum predication_type_index) const
+{
+  return true;
+}
+
 } // end namespace riscv_vector
 
 #endif
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index 72f1e4059ab..01530c1ae75 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -302,10 +302,6 @@ get_vl (rtx_insn *rinsn)
 {
   if (has_vl_op (rinsn))
     {
-      /* We only call get_vl for VLMAX use VTYPE instruction.
-	 It's used to get the VL operand to emit VLMAX VSETVL instruction:
-	 vsetvl a5,zero,e32,m1,ta,ma.  */
-      gcc_assert (get_attr_avl_type (rinsn) == VLMAX);
       extract_insn_cached (rinsn);
       return recog_data.operand[get_attr_vl_op_idx (rinsn)];
     }
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 7af9f5402ec..d30e0235356 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -9,7 +9,7 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) $(TM_P_H) \
   memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) $(DIAGNOSTIC_H) $(EXPR_H) \
   $(FUNCTION_H) fold-const.h gimplify.h explow.h stor-layout.h $(REGS_H) \
-  alias.h langhooks.h attribs.h stringpool.h \
+  alias.h langhooks.h attribs.h stringpool.h emit-rtl.h \
   $(srcdir)/config/riscv/riscv-vector-builtins.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
-- 
2.36.3



* Re: [PATCH] RISC-V: Support vle.v/vse.v intrinsics
  2022-12-23  0:52 [PATCH] RISC-V: Support vle.v/vse.v intrinsics juzhe.zhong
@ 2022-12-23  0:56 ` 钟居哲
  2022-12-23  5:42   ` Kito Cheng
  0 siblings, 1 reply; 3+ messages in thread
From: 钟居哲 @ 2022-12-23  0:56 UTC (permalink / raw)
  To: 钟居哲, gcc-patches; +Cc: kito.cheng, palmer


This patch provides the minimum intrinsic support the VSETVL pass needs for the AVL model.
The corresponding unit tests for vle.v/vse.v will be added after I support the AVL model
and have well tested the VSETVL pass patch.


juzhe.zhong@rivai.ai
 
From: juzhe.zhong
Date: 2022-12-23 08:52
To: gcc-patches
CC: kito.cheng; palmer; Ju-Zhe Zhong
Subject: [PATCH] RISC-V: Support vle.v/vse.v intrinsics
From: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
 
gcc/ChangeLog:
 
        * config/riscv/riscv-protos.h (get_avl_type_rtx): New function.
        * config/riscv/riscv-v.cc (get_avl_type_rtx): Ditto.
        * config/riscv/riscv-vector-builtins-bases.cc (class loadstore): New class.
        (BASE): Ditto.
        * config/riscv/riscv-vector-builtins-bases.h: Ditto.      
        * config/riscv/riscv-vector-builtins-functions.def (vle): Ditto.
        (vse): Ditto.
        * config/riscv/riscv-vector-builtins-shapes.cc (build_one): Ditto.
        (struct loadstore_def): Ditto.
        (SHAPE): Ditto.
        * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
        * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_U_OPS): New macro.
        (DEF_RVV_F_OPS): Ditto.
        (vuint8mf8_t): Add corresponding mask type.
        (vuint8mf4_t): Ditto.
        (vuint8mf2_t): Ditto.
        (vuint8m1_t): Ditto.
        (vuint8m2_t): Ditto.
        (vuint8m4_t): Ditto.
        (vuint8m8_t): Ditto.
        (vuint16mf4_t): Ditto.
        (vuint16mf2_t): Ditto.
        (vuint16m1_t): Ditto.
        (vuint16m2_t): Ditto.
        (vuint16m4_t): Ditto.
        (vuint16m8_t): Ditto.
        (vuint32mf2_t): Ditto.
        (vuint32m1_t): Ditto.
        (vuint32m2_t): Ditto.
        (vuint32m4_t): Ditto.
        (vuint32m8_t): Ditto.
        (vuint64m1_t): Ditto.
        (vuint64m2_t): Ditto.
        (vuint64m4_t): Ditto.
        (vuint64m8_t): Ditto.
        (vfloat32mf2_t): Ditto.
        (vfloat32m1_t): Ditto.
        (vfloat32m2_t): Ditto.
        (vfloat32m4_t): Ditto.
        (vfloat32m8_t): Ditto.
        (vfloat64m1_t): Ditto.
        (vfloat64m2_t): Ditto.
        (vfloat64m4_t): Ditto.
        (vfloat64m8_t): Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Adjust for new macro.
        (DEF_RVV_I_OPS): Ditto.
        (DEF_RVV_U_OPS): New macro.
        (DEF_RVV_F_OPS): New macro.
        (use_real_mask_p): New function.
        (use_real_merge_p): Ditto.
        (get_tail_policy_for_pred): Ditto.
        (get_mask_policy_for_pred): Ditto.
        (function_builder::apply_predication): Ditto.
        (function_builder::append_base_name): Ditto.
        (function_builder::append_sew): Ditto.
        (function_expander::add_vundef_operand): Ditto.
        (function_expander::add_mem_operand): Ditto.
        (function_expander::use_contiguous_load_insn): Ditto.
        (function_expander::use_contiguous_store_insn): Ditto.
        * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Adjust for adding mask type.
        (vbool64_t): Ditto.
        (vbool32_t): Ditto.
        (vbool16_t): Ditto.
        (vbool8_t): Ditto.
        (vbool4_t): Ditto.
        (vbool2_t): Ditto.
        (vbool1_t): Ditto.
        (vint8mf8_t): Ditto.
        (vint8mf4_t): Ditto.
        (vint8mf2_t): Ditto.
        (vint8m1_t): Ditto.
        (vint8m2_t): Ditto.
        (vint8m4_t): Ditto.
        (vint8m8_t): Ditto.
        (vint16mf4_t): Ditto.
        (vint16mf2_t): Ditto.
        (vint16m1_t): Ditto.
        (vint16m2_t): Ditto.
        (vint16m4_t): Ditto.
        (vint16m8_t): Ditto.
        (vint32mf2_t): Ditto.
        (vint32m1_t): Ditto.
        (vint32m2_t): Ditto.
        (vint32m4_t): Ditto.
        (vint32m8_t): Ditto.
        (vint64m1_t): Ditto.
        (vint64m2_t): Ditto.
        (vint64m4_t): Ditto.
        (vint64m8_t): Ditto.
        (vfloat32mf2_t): Ditto.
        (vfloat32m1_t): Ditto.
        (vfloat32m2_t): Ditto.
        (vfloat32m4_t): Ditto.
        (vfloat32m8_t): Ditto.
        (vfloat64m1_t): Ditto.
        (vfloat64m4_t): Ditto.
        * config/riscv/riscv-vector-builtins.h (function_expander::add_output_operand): New function.
        (function_expander::add_all_one_mask_operand): Ditto.
        (function_expander::add_fixed_operand): Ditto.
        (function_expander::vector_mode): Ditto.
        (function_base::apply_vl_p): Ditto.
        (function_base::can_be_overloaded_p): Ditto.
        * config/riscv/riscv-vsetvl.cc (get_vl): Remove restrict of supporting AVL is not VLMAX.
        * config/riscv/t-riscv: Add include file.
 
---
gcc/config/riscv/riscv-protos.h               |   1 +
gcc/config/riscv/riscv-v.cc                   |  10 +-
.../riscv/riscv-vector-builtins-bases.cc      |  49 +++-
.../riscv/riscv-vector-builtins-bases.h       |   2 +
.../riscv/riscv-vector-builtins-functions.def |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  38 ++-
.../riscv/riscv-vector-builtins-shapes.h      |   1 +
.../riscv/riscv-vector-builtins-types.def     |  49 +++-
gcc/config/riscv/riscv-vector-builtins.cc     | 236 +++++++++++++++++-
gcc/config/riscv/riscv-vector-builtins.def    | 122 ++++-----
gcc/config/riscv/riscv-vector-builtins.h      |  65 +++++
gcc/config/riscv/riscv-vsetvl.cc              |   4 -
gcc/config/riscv/t-riscv                      |   2 +-
13 files changed, 506 insertions(+), 76 deletions(-)
 
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index cfd0f284f91..64ee56b8a7c 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -171,6 +171,7 @@ enum mask_policy
};
enum tail_policy get_prefer_tail_policy ();
enum mask_policy get_prefer_mask_policy ();
+rtx get_avl_type_rtx (enum avl_type);
}
/* We classify builtin types into two classes:
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index f02a048f76d..b616ee3e6b3 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -79,8 +79,7 @@ public:
   }
   void add_avl_type_operand ()
   {
-    rtx vlmax_rtx = gen_int_mode (avl_type::VLMAX, Pmode);
-    add_input_operand (vlmax_rtx, Pmode);
+    add_input_operand (get_avl_type_rtx (avl_type::VLMAX), Pmode);
   }
   void expand (enum insn_code icode, bool temporary_volatile_p = false)
@@ -342,4 +341,11 @@ get_prefer_mask_policy ()
   return MASK_ANY;
}
+/* Get avl_type rtx.  */
+rtx
+get_avl_type_rtx (enum avl_type type)
+{
+  return gen_int_mode (type, Pmode);
+}
+
} // namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c1193dbbfb5..10373e5ccf2 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -53,6 +53,11 @@ template<bool VLMAX_P>
class vsetvl : public function_base
{
public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
   rtx expand (function_expander &e) const override
   {
     if (VLMAX_P)
@@ -79,11 +84,47 @@ public:
   }
};
+/* Implements vle.v/vse.v codegen.  */
+template <bool STORE_P>
+class loadstore : public function_base
+{
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P)
+      return true;
+    return pred != PRED_TYPE_none && pred != PRED_TYPE_mu;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (STORE_P)
+      return e.use_contiguous_store_insn (code_for_pred_mov (e.vector_mode ()));
+    else
+      return e.use_contiguous_load_insn (code_for_pred_mov (e.vector_mode ()));
+  }
+};
+
static CONSTEXPR const vsetvl<false> vsetvl_obj;
static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
-namespace bases {
-const function_base *const vsetvl = &vsetvl_obj;
-const function_base *const vsetvlmax = &vsetvlmax_obj;
-}
+static CONSTEXPR const loadstore<false> vle_obj;
+static CONSTEXPR const loadstore<true> vse_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (vsetvl)
+BASE (vsetvlmax)
+BASE (vle)
+BASE (vse)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index a0ae18eef03..79684bcb50d 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -26,6 +26,8 @@ namespace riscv_vector {
namespace bases {
extern const function_base *const vsetvl;
extern const function_base *const vsetvlmax;
+extern const function_base *const vle;
+extern const function_base *const vse;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index dc41537865e..e5ebb7d829c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -39,5 +39,8 @@ along with GCC; see the file COPYING3. If not see
/* 6. Configuration-Setting Instructions.  */
DEF_RVV_FUNCTION (vsetvl, vsetvl, none_preds, i_none_size_size_ops)
DEF_RVV_FUNCTION (vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+/* 7. Vector Loads and Stores.  */
+DEF_RVV_FUNCTION (vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
#undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index bb2ee8767a0..0332c031ce4 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -48,6 +48,7 @@ build_one (function_builder &b, const function_group_info &group,
   tree return_type = group.ops_infos.ret.get_tree_type (
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
+  b.apply_predication (function_instance, return_type, argument_types);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
}
@@ -93,13 +94,46 @@ struct vsetvl_def : public build_base
     /* vsetvl* instruction doesn't have C++ overloaded functions.  */
     if (overloaded_p)
       return nullptr;
-    b.append_name ("__riscv_");
-    b.append_name (instance.base_name);
+    b.append_base_name (instance.base_name);
     b.append_name (type_suffixes[instance.type.index].vsetvl);
     return b.finish_name ();
   }
};
+
+/* loadstore_def class.  */
+struct loadstore_def : public build_base
+{
+  char *get_name (function_builder &b, const function_instance &instance,
+   bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    tree type = builtin_types[instance.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+    int sew = GET_MODE_BITSIZE (GET_MODE_INNER (mode));
+    /* vop --> vop<sew>.  */
+    b.append_sew (sew);
+
+    /* vop<sew>_v --> vop<sew>_v_<type>.  */
+    if (!overloaded_p)
+      {
+ /* vop<sew> --> vop<sew>_v.  */
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ /* vop<sew>_v --> vop<sew>_v_<type>.  */
+ b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
SHAPE(vsetvl, vsetvl)
SHAPE(vsetvl, vsetvlmax)
+SHAPE(loadstore, loadstore)
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index f2d876fb133..b17dcd88877 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -26,6 +26,7 @@ namespace riscv_vector {
namespace shapes {
extern const function_shape *const vsetvl;
extern const function_shape *const vsetvlmax;
+extern const function_shape *const loadstore;
}
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index f282a5e7654..6a867c99987 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -18,12 +18,24 @@ You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
-/* Use "DEF_ALL_SIGNED_INTEGER" macro include all signed integer which will be
+/* Use the "DEF_RVV_I_OPS" macro to include all signed integer types which will be
    iterated and registered as intrinsic functions.  */
#ifndef DEF_RVV_I_OPS
#define DEF_RVV_I_OPS(TYPE, REQUIRE)
#endif
+/* Use the "DEF_RVV_U_OPS" macro to include all unsigned integer types which
+   will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U_OPS
+#define DEF_RVV_U_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_F_OPS" macro to include all floating-point types which
+   will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_F_OPS
+#define DEF_RVV_F_OPS(TYPE, REQUIRE)
+#endif
+
DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
DEF_RVV_I_OPS (vint8mf4_t, 0)
DEF_RVV_I_OPS (vint8mf2_t, 0)
@@ -47,4 +59,39 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint8mf4_t, 0)
+DEF_RVV_U_OPS (vuint8mf2_t, 0)
+DEF_RVV_U_OPS (vuint8m1_t, 0)
+DEF_RVV_U_OPS (vuint8m2_t, 0)
+DEF_RVV_U_OPS (vuint8m4_t, 0)
+DEF_RVV_U_OPS (vuint8m8_t, 0)
+DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint16mf2_t, 0)
+DEF_RVV_U_OPS (vuint16m1_t, 0)
+DEF_RVV_U_OPS (vuint16m2_t, 0)
+DEF_RVV_U_OPS (vuint16m4_t, 0)
+DEF_RVV_U_OPS (vuint16m8_t, 0)
+DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint32m1_t, 0)
+DEF_RVV_U_OPS (vuint32m2_t, 0)
+DEF_RVV_U_OPS (vuint32m4_t, 0)
+DEF_RVV_U_OPS (vuint32m8_t, 0)
+DEF_RVV_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32)
+DEF_RVV_F_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
+
#undef DEF_RVV_I_OPS
+#undef DEF_RVV_U_OPS
+#undef DEF_RVV_F_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 43150aa47a4..9170776f979 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -44,6 +44,7 @@
#include "attribs.h"
#include "targhooks.h"
#include "regs.h"
+#include "emit-rtl.h"
#include "riscv-vector-builtins.h"
#include "riscv-vector-builtins-shapes.h"
#include "riscv-vector-builtins-bases.h"
@@ -105,11 +106,20 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-      VSETVL_SUFFIX)                                            \
+      VSETVL_SUFFIX, MASK_TYPE)                                 \
   {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
#include "riscv-vector-builtins.def"
};
+/* Mask type for each RVV type.  */
+const vector_type_index mask_types[NUM_VECTOR_TYPES + 1] = {
+#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
+      VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
+      VSETVL_SUFFIX, MASK_TYPE)                                 \
+  VECTOR_TYPE_##MASK_TYPE,
+#include "riscv-vector-builtins.def"
+};
+
/* Static information about predication suffix for each RVV type.  */
const char *const predication_suffixes[NUM_PRED_TYPES] = {
   "", /* PRED_TYPE_none.  */
@@ -123,6 +133,14 @@ static const rvv_type_info i_ops[] = {
#include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
+/* A list of all types that will be registered for intrinsic functions.  */
+static const rvv_type_info all_ops[] = {
+#define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_F_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
   = rvv_arg_type_info (NUM_BASE_TYPES);
@@ -134,10 +152,28 @@ static CONSTEXPR const rvv_arg_type_info void_args[]
static CONSTEXPR const rvv_arg_type_info size_args[]
   = {rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+/* A list of args for vector_type func (const scalar_type *) function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, vector_type) function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
/* A list of none preds that will be registered for intrinsic functions.  */
static CONSTEXPR const predication_type_index none_preds[]
   = {PRED_TYPE_none, NUM_PRED_TYPES};
+/* vop/vop_m/vop_tu/vop_tum/vop_tumu/vop_mu will be registered.  */
+static CONSTEXPR const predication_type_index full_preds[]
+  = {PRED_TYPE_none, PRED_TYPE_m,  PRED_TYPE_tu,  PRED_TYPE_tum,
+     PRED_TYPE_tumu, PRED_TYPE_mu, NUM_PRED_TYPES};
+
+/* vop/vop_m will be registered.  */
+static CONSTEXPR const predication_type_index none_m_preds[]
+  = {PRED_TYPE_none, PRED_TYPE_m, NUM_PRED_TYPES};
+
/* A static operand information for size_t func (void) function registration. */
static CONSTEXPR const rvv_op_info i_none_size_void_ops
   = {i_ops, /* Types */
@@ -153,6 +189,22 @@ static CONSTEXPR const rvv_op_info i_none_size_size_ops
      rvv_arg_type_info (RVV_BASE_size), /* Return type */
      size_args /* Args */};
+/* Static operand information for vector_type func (const scalar_type *)
+   function registration.  */
+static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ops
+  = {all_ops,   /* Types */
+     OP_TYPE_v,   /* Suffix */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+     scalar_const_ptr_args /* Args */};
+
+/* Static operand information for void func (scalar_type *, vector_type)
+   function registration.  */
+static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ops
+  = {all_ops, /* Types */
+     OP_TYPE_v, /* Suffix */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type */
+     scalar_ptr_args /* Args */};
+
/* A list of all RVV intrinsic functions.  */
static function_group_info function_groups[] = {
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
@@ -362,6 +414,42 @@ check_required_extensions (uint64_t required_extensions)
   return true;
}
+/* Return true if predication is using a real mask operand.  */
+static bool
+use_real_mask_p (enum predication_type_index pred)
+{
+  return pred == PRED_TYPE_m || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
+ || pred == PRED_TYPE_mu;
+}
+
+/* Return true if predication is using a real merge operand.  */
+static bool
+use_real_merge_p (enum predication_type_index pred)
+{
+  return pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
+ || pred == PRED_TYPE_mu;
+}
+
+/* Get the TAIL policy for predication PRED.  If PRED indicates tail
+   undisturbed (TU), return TAIL_UNDISTURBED; otherwise return the
+   preferred default configuration.  */
+static rtx
+get_tail_policy_for_pred (enum predication_type_index pred)
+{
+  if (pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu)
+    return gen_int_mode (TAIL_UNDISTURBED, Pmode);
+  return gen_int_mode (get_prefer_tail_policy (), Pmode);
+}
+
+/* Get the MASK policy for predication PRED.  If PRED indicates mask
+   undisturbed (MU), return MASK_UNDISTURBED; otherwise return the
+   preferred default configuration.  */
+static rtx
+get_mask_policy_for_pred (enum predication_type_index pred)
+{
+  if (pred == PRED_TYPE_tumu || pred == PRED_TYPE_mu)
+    return gen_int_mode (MASK_UNDISTURBED, Pmode);
+  return gen_int_mode (get_prefer_mask_policy (), Pmode);
+}
+
tree
rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
{
@@ -546,6 +634,28 @@ function_builder::allocate_argument_types (const function_instance &instance,
       instance.op_info->args[i].get_tree_type (instance.type.index));
}
+/* Apply predication to ARGUMENT_TYPES.  */
+void
+function_builder::apply_predication (const function_instance &instance,
+      tree return_type,
+      vec<tree> &argument_types) const
+{
+  /* These predication types need a merge operand of the return type.  */
+  if (instance.pred == PRED_TYPE_tu || instance.pred == PRED_TYPE_tum
+      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
+    argument_types.quick_insert (0, return_type);
+
+  /* These predication types need a mask operand.  */
+  tree mask_type = builtin_types[mask_types[instance.type.index]].vector;
+  if (instance.pred == PRED_TYPE_m || instance.pred == PRED_TYPE_tum
+      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
+    argument_types.quick_insert (0, mask_type);
+
+  /* Check whether a vl parameter is needed.  */
+  if (instance.base->apply_vl_p ())
+    argument_types.quick_push (size_type_node);
+}
+
/* Register all the functions in GROUP.  */
void
function_builder::register_function_group (const function_group_info &group)
@@ -560,6 +670,37 @@ function_builder::append_name (const char *name)
   obstack_grow (&m_string_obstack, name, strlen (name));
}
+/* Add the "__riscv_" prefix followed by NAME.  */
+void
+function_builder::append_base_name (const char *name)
+{
+  append_name ("__riscv_");
+  append_name (name);
+}
+
+/* Add SEW into function name.  */
+void
+function_builder::append_sew (int sew)
+{
+  switch (sew)
+    {
+    case 8:
+      append_name ("8");
+      break;
+    case 16:
+      append_name ("16");
+      break;
+    case 32:
+      append_name ("32");
+      break;
+    case 64:
+      append_name ("64");
+      break;
+    default:
+      gcc_unreachable ();
+    }
+}
+
/* Zero-terminate and complete the function name being built.  */
char *
function_builder::finish_name ()
@@ -694,6 +835,99 @@ function_expander::add_input_operand (unsigned argno)
   add_input_operand (TYPE_MODE (TREE_TYPE (arg)), x);
}
+/* Since we may normalize vop/vop_tu/vop_m/vop_tumu/... into a single pattern,
+   add a vundef operand for the intrinsics that don't need a real merge.  */
+void
+function_expander::add_vundef_operand (machine_mode mode)
+{
+  rtx vundef = gen_rtx_UNSPEC (mode, gen_rtvec (1, const0_rtx), UNSPEC_VUNDEF);
+  add_input_operand (mode, vundef);
+}
+
+/* Add a memory operand with mode MODE and address ADDR.  */
+rtx
+function_expander::add_mem_operand (machine_mode mode, rtx addr)
+{
+  gcc_assert (VECTOR_MODE_P (mode));
+  rtx mem = gen_rtx_MEM (mode, memory_address (mode, addr));
+  /* The memory is only guaranteed to be element-aligned.  */
+  set_mem_align (mem, GET_MODE_ALIGNMENT (GET_MODE_INNER (mode)));
+  add_fixed_operand (mem);
+  return mem;
+}
+
+/* Expand a contiguous load using instruction ICODE.  */
+rtx
+function_expander::use_contiguous_load_insn (insn_code icode)
+{
+  gcc_assert (call_expr_nargs (exp) > 0);
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  tree mask_type = builtin_types[mask_types[type.index]].vector;
+  machine_mode mask_mode = TYPE_MODE (mask_type);
+
+  /* Record the offset to get the argument.  */
+  int arg_offset = 0;
+
+  if (use_real_mask_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_all_one_mask_operand (mask_mode);
+
+  if (use_real_merge_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_vundef_operand (mode);
+
+  tree addr_arg = CALL_EXPR_ARG (exp, arg_offset++);
+  rtx addr = expand_normal (addr_arg);
+  add_mem_operand (mode, addr);
+
+  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (argno);
+
+  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
+  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
+  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  return generate_insn (icode);
+}
+
+/* Expand a contiguous store using instruction ICODE.  */
+rtx
+function_expander::use_contiguous_store_insn (insn_code icode)
+{
+  gcc_assert (call_expr_nargs (exp) > 0);
+  machine_mode mode = TYPE_MODE (builtin_types[type.index].vector);
+  tree mask_type = builtin_types[mask_types[type.index]].vector;
+  machine_mode mask_mode = TYPE_MODE (mask_type);
+
+  /* Record the offset to get the argument.  */
+  int arg_offset = 0;
+
+  int addr_loc = use_real_mask_p (pred) ? 1 : 0;
+  tree addr_arg = CALL_EXPR_ARG (exp, addr_loc);
+  rtx addr = expand_normal (addr_arg);
+  rtx mem = add_mem_operand (mode, addr);
+
+  if (use_real_mask_p (pred))
+    add_input_operand (arg_offset++);
+  else
+    add_all_one_mask_operand (mask_mode);
+
+  /* To model the "+m" constraint, include the memory operand as an input.  */
+  add_input_operand (mode, mem);
+
+  arg_offset++;
+  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (argno);
+
+  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
+  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
+  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
+
+  return generate_insn (icode);
+}
+
/* Generate instruction ICODE, given that its operands have already
    been added to M_OPS.  Return the value of the first operand.  */
rtx
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index b7a633ed376..7636e34d595 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -44,7 +44,7 @@ along with GCC; see the file COPYING3.  If not see
#ifndef DEF_RVV_TYPE
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
-      VSETVL_SUFFIX)
+      VSETVL_SUFFIX, MASK_TYPE)
#endif
/* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
@@ -61,212 +61,212 @@ along with GCC; see the file COPYING3.  If not see
/* SEW/LMUL = 64:
    Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1BImode.  */
-DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , )
+DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , , vbool64_t)
/* SEW/LMUL = 32:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , )
+DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , , vbool32_t)
/* SEW/LMUL = 16:
    Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32.  */
-DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , )
+DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , , vbool16_t)
/* SEW/LMUL = 8:
    Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , )
+DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , , vbool8_t)
/* SEW/LMUL = 4:
    Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , )
+DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , , vbool4_t)
/* SEW/LMUL = 2:
    Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , )
+DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , , vbool2_t)
/* SEW/LMUL = 1:
    Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32.  */
-DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , )
+DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , , vbool1_t)
/* LMUL = 1/8:
    Only enble when TARGET_MIN_VLEN > 32 and machine mode = VNx1QImode.  */
DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, intQI, VNx1QI, VOID, _i8mf8, _i8,
-       _e8mf8)
+       _e8mf8, vbool64_t)
DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, unsigned_intQI, VNx1QI, VOID,
-       _u8mf8, _u8, _e8mf8)
+       _u8mf8, _u8, _e8mf8, vbool64_t)
/* LMUL = 1/4:
    Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, intQI, VNx2QI, VNx1QI, _i8mf4,
-       _i8, _e8mf4)
+       _i8, _e8mf4, vbool32_t)
DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, unsigned_intQI, VNx2QI, VNx1QI,
-       _u8mf4, _u8, _e8mf4)
+       _u8mf4, _u8, _e8mf4, vbool32_t)
/* LMUL = 1/2:
    Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, intQI, VNx4QI, VNx2QI, _i8mf2,
-       _i8, _e8mf2)
+       _i8, _e8mf2, vbool16_t)
DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, unsigned_intQI, VNx4QI, VNx2QI,
-       _u8mf2, _u8, _e8mf2)
+       _u8mf2, _u8, _e8mf2, vbool16_t)
/* LMUL = 1:
    Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, intQI, VNx8QI, VNx4QI, _i8m1, _i8,
-       _e8m1)
+       _e8m1, vbool8_t)
DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, unsigned_intQI, VNx8QI, VNx4QI,
-       _u8m1, _u8, _e8m1)
+       _u8m1, _u8, _e8m1, vbool8_t)
/* LMUL = 2:
    Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, intQI, VNx16QI, VNx8QI, _i8m2, _i8,
-       _e8m2)
+       _e8m2, vbool4_t)
DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, unsigned_intQI, VNx16QI, VNx8QI,
-       _u8m2, _u8, _e8m2)
+       _u8m2, _u8, _e8m2, vbool4_t)
/* LMUL = 4:
    Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, intQI, VNx32QI, VNx16QI, _i8m4,
-       _i8, _e8m4)
+       _i8, _e8m4, vbool2_t)
DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, unsigned_intQI, VNx32QI, VNx16QI,
-       _u8m4, _u8, _e8m4)
+       _u8m4, _u8, _e8m4, vbool2_t)
/* LMUL = 8:
    Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, intQI, VNx64QI, VNx32QI, _i8m8,
-       _i8, _e8m8)
+       _i8, _e8m8, vbool1_t)
DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, unsigned_intQI, VNx64QI, VNx32QI,
-       _u8m8, _u8, _e8m8)
+       _u8m8, _u8, _e8m8, vbool1_t)
/* LMUL = 1/4:
    Only enble when TARGET_MIN_VLEN > 32 and machine mode = VNx1HImode.  */
DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, intHI, VNx1HI, VOID, _i16mf4,
-       _i16, _e16mf4)
+       _i16, _e16mf4, vbool64_t)
DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, unsigned_intHI, VNx1HI, VOID,
-       _u16mf4, _u16, _e16mf4)
+       _u16mf4, _u16, _e16mf4, vbool64_t)
/* LMUL = 1/2:
    Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, intHI, VNx2HI, VNx1HI, _i16mf2,
-       _i16, _e16mf2)
+       _i16, _e16mf2, vbool32_t)
DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, unsigned_intHI, VNx2HI,
-       VNx1HI, _u16mf2, _u16, _e16mf2)
+       VNx1HI, _u16mf2, _u16, _e16mf2, vbool32_t)
/* LMUL = 1:
    Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, intHI, VNx4HI, VNx2HI, _i16m1,
-       _i16, _e16m1)
+       _i16, _e16m1, vbool16_t)
DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, unsigned_intHI, VNx4HI, VNx2HI,
-       _u16m1, _u16, _e16m1)
+       _u16m1, _u16, _e16m1, vbool16_t)
/* LMUL = 2:
    Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, intHI, VNx8HI, VNx4HI, _i16m2,
-       _i16, _e16m2)
+       _i16, _e16m2, vbool8_t)
DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, unsigned_intHI, VNx8HI, VNx4HI,
-       _u16m2, _u16, _e16m2)
+       _u16m2, _u16, _e16m2, vbool8_t)
/* LMUL = 4:
    Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, intHI, VNx16HI, VNx8HI, _i16m4,
-       _i16, _e16m4)
+       _i16, _e16m4, vbool4_t)
DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, unsigned_intHI, VNx16HI,
-       VNx8HI, _u16m4, _u16, _e16m4)
+       VNx8HI, _u16m4, _u16, _e16m4, vbool4_t)
/* LMUL = 8:
    Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, intHI, VNx32HI, VNx16HI, _i16m8,
-       _i16, _e16m8)
+       _i16, _e16m8, vbool2_t)
DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, unsigned_intHI, VNx32HI,
-       VNx16HI, _u16m8, _u16, _e16m8)
+       VNx16HI, _u16m8, _u16, _e16m8, vbool2_t)
/* LMUL = 1/2:
    Only enble when TARGET_MIN_VLEN > 32 and machine mode = VNx1SImode.  */
DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx1SI, VOID, _i32mf2,
-       _i32, _e32mf2)
+       _i32, _e32mf2, vbool64_t)
DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, unsigned_int32, VNx1SI, VOID,
-       _u32mf2, _u32, _e32mf2)
+       _u32mf2, _u32, _e32mf2, vbool64_t)
/* LMUL = 1:
    Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx2SI, VNx1SI, _i32m1,
-       _i32, _e32m1)
+       _i32, _e32m1, vbool32_t)
DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, unsigned_int32, VNx2SI, VNx1SI,
-       _u32m1, _u32, _e32m1)
+       _u32m1, _u32, _e32m1, vbool32_t)
/* LMUL = 2:
    Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx4SI, VNx2SI, _i32m2,
-       _i32, _e32m2)
+       _i32, _e32m2, vbool16_t)
DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, unsigned_int32, VNx4SI, VNx2SI,
-       _u32m2, _u32, _e32m2)
+       _u32m2, _u32, _e32m2, vbool16_t)
/* LMUL = 4:
    Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx8SI, VNx4SI, _i32m4,
-       _i32, _e32m4)
+       _i32, _e32m4, vbool8_t)
DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, unsigned_int32, VNx8SI, VNx4SI,
-       _u32m4, _u32, _e32m4)
+       _u32m4, _u32, _e32m4, vbool8_t)
/* LMUL = 8:
    Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx16SI, VNx8SI, _i32m8,
-       _i32, _e32m8)
+       _i32, _e32m8, vbool4_t)
DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, unsigned_int32, VNx16SI,
-       VNx8SI, _u32m8, _u32, _e32m8)
+       VNx8SI, _u32m8, _u32, _e32m8, vbool4_t)
/* SEW = 64:
    Disable when TARGET_MIN_VLEN > 32.  */
DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, intDI, VNx1DI, VOID, _i64m1,
-       _i64, _e64m1)
+       _i64, _e64m1, vbool64_t)
DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, unsigned_intDI, VNx1DI, VOID,
-       _u64m1, _u64, _e64m1)
+       _u64m1, _u64, _e64m1, vbool64_t)
DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, intDI, VNx2DI, VOID, _i64m2,
-       _i64, _e64m2)
+       _i64, _e64m2, vbool32_t)
DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, unsigned_intDI, VNx2DI, VOID,
-       _u64m2, _u64, _e64m2)
+       _u64m2, _u64, _e64m2, vbool32_t)
DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, intDI, VNx4DI, VOID, _i64m4,
-       _i64, _e64m4)
+       _i64, _e64m4, vbool16_t)
DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, unsigned_intDI, VNx4DI, VOID,
-       _u64m4, _u64, _e64m4)
+       _u64m4, _u64, _e64m4, vbool16_t)
DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, intDI, VNx8DI, VOID, _i64m8,
-       _i64, _e64m8)
+       _i64, _e64m8, vbool8_t)
DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, unsigned_intDI, VNx8DI, VOID,
-       _u64m8, _u64, _e64m8)
+       _u64m8, _u64, _e64m8, vbool8_t)
/* LMUL = 1/2:
    Only enble when TARGET_MIN_VLEN > 32 and machine mode = VNx1SFmode.  */
DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx1SF, VOID,
-       _f32mf2, _f32, _e32mf2)
+       _f32mf2, _f32, _e32mf2, vbool64_t)
/* LMUL = 1:
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx2SF, VNx1SF,
-       _f32m1, _f32, _e32m1)
+       _f32m1, _f32, _e32m1, vbool32_t)
/* LMUL = 2:
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx4SF, VNx2SF,
-       _f32m2, _f32, _e32m2)
+       _f32m2, _f32, _e32m2, vbool16_t)
/* LMUL = 4:
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx8SF, VNx4SF,
-       _f32m4, _f32, _e32m4)
+       _f32m4, _f32, _e32m4, vbool8_t)
/* LMUL = 8:
    Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
    Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx16SF, VNx8SF,
-       _f32m8, _f32, _e32m8)
+       _f32m8, _f32, _e32m8, vbool4_t)
/* SEW = 64:
    Disable when TARGET_VECTOR_FP64.  */
DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx1DF, VOID, _f64m1,
-       _f64, _e64m1)
+       _f64, _e64m1, vbool64_t)
DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx2DF, VOID, _f64m2,
-       _f64, _e64m2)
+       _f64, _e64m2, vbool32_t)
DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx4DF, VOID, _f64m4,
-       _f64, _e64m4)
+       _f64, _e64m4, vbool16_t)
DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx8DF, VOID, _f64m8,
-       _f64, _e64m8)
+       _f64, _e64m8, vbool8_t)
DEF_RVV_OP_TYPE (vv)
DEF_RVV_OP_TYPE (vx)
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 425da12326c..c13df99cb5b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -264,10 +264,13 @@ public:
   ~function_builder ();
   void allocate_argument_types (const function_instance &, vec<tree> &) const;
+  void apply_predication (const function_instance &, tree, vec<tree> &) const;
   void add_unique_function (const function_instance &, const function_shape *,
    tree, vec<tree> &);
   void register_function_group (const function_group_info &);
   void append_name (const char *);
+  void append_base_name (const char *);
+  void append_sew (int);
   char *finish_name ();
private:
@@ -315,6 +318,16 @@ public:
   void add_input_operand (machine_mode, rtx);
   void add_input_operand (unsigned argno);
+  void add_output_operand (machine_mode, rtx);
+  void add_all_one_mask_operand (machine_mode mode);
+  void add_vundef_operand (machine_mode mode);
+  void add_fixed_operand (rtx);
+  rtx add_mem_operand (machine_mode, rtx);
+
+  machine_mode vector_mode (void) const;
+
+  rtx use_contiguous_load_insn (insn_code);
+  rtx use_contiguous_store_insn (insn_code);
   rtx generate_insn (insn_code);
   /* The function call expression.  */
@@ -342,6 +355,12 @@ public:
      in addition to reading its arguments and returning a result.  */
   virtual unsigned int call_properties (const function_instance &) const;
+  /* Return true if intrinsics should apply vl operand.  */
+  virtual bool apply_vl_p () const;
+
+  /* Return true if intrinsic can be overloaded.  */
+  virtual bool can_be_overloaded_p (enum predication_type_index) const;
+
   /* Expand the given call into rtl.  Return the result of the function,
      or an arbitrary value if the function doesn't return a result.  */
   virtual rtx expand (function_expander &) const = 0;
@@ -394,6 +413,37 @@ function_expander::add_input_operand (machine_mode mode, rtx op)
   create_input_operand (&m_ops[opno++], op, mode);
}
+/* Create an output operand, add it to M_OPS and increment OPNO.  */
+inline void
+function_expander::add_output_operand (machine_mode mode, rtx target)
+{
+  create_output_operand (&m_ops[opno++], target, mode);
+}
+
+/* Since we may normalize vop/vop_tu/vop_m/vop_tumu/... into a single pattern,
+   add a fake all-ones mask for the intrinsics that don't need a real mask.  */
+inline void
+function_expander::add_all_one_mask_operand (machine_mode mode)
+{
+  add_input_operand (mode, CONSTM1_RTX (mode));
+}
+
+/* Add an operand that must be X.  The only way of legitimizing an
+   invalid X is to reload the address of a MEM.  */
+inline void
+function_expander::add_fixed_operand (rtx x)
+{
+  create_fixed_operand (&m_ops[opno++], x);
+}
+
+/* Return the machine_mode of the corresponding vector type.  */
+inline machine_mode
+function_expander::vector_mode (void) const
+{
+  return TYPE_MODE (builtin_types[type.index].vector);
+}
+
/* Default implementation of function_base::call_properties, with conservatively
    correct behavior for floating-point instructions.  */
inline unsigned int
@@ -405,6 +455,21 @@ function_base::call_properties (const function_instance &instance) const
   return flags;
}
+/* We apply the vl operand by default since most intrinsics have a vl
+   operand.  */
+inline bool
+function_base::apply_vl_p () const
+{
+  return true;
+}
+
+/* Since most intrinsics can be overloaded, return true by default.  */
+inline bool
+function_base::can_be_overloaded_p (enum predication_type_index) const
+{
+  return true;
+}
+
} // end namespace riscv_vector
#endif
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index 72f1e4059ab..01530c1ae75 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -302,10 +302,6 @@ get_vl (rtx_insn *rinsn)
{
   if (has_vl_op (rinsn))
     {
-      /* We only call get_vl for VLMAX use VTYPE instruction.
- It's used to get the VL operand to emit VLMAX VSETVL instruction:
- vsetvl a5,zero,e32,m1,ta,ma.  */
-      gcc_assert (get_attr_avl_type (rinsn) == VLMAX);
       extract_insn_cached (rinsn);
       return recog_data.operand[get_attr_vl_op_idx (rinsn)];
     }
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 7af9f5402ec..d30e0235356 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -9,7 +9,7 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) $(TM_P_H) \
   memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) $(DIAGNOSTIC_H) $(EXPR_H) \
   $(FUNCTION_H) fold-const.h gimplify.h explow.h stor-layout.h $(REGS_H) \
-  alias.h langhooks.h attribs.h stringpool.h \
+  alias.h langhooks.h attribs.h stringpool.h emit-rtl.h \
   $(srcdir)/config/riscv/riscv-vector-builtins.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
-- 
2.36.3
 

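For context, a rough sketch of the kind of user code this patch enables. The function names follow the `__riscv_` prefix and SEW/LMUL suffix scheme built by `append_base_name`/`append_sew`; exact spellings are illustrative until the header and unit tests land, and the snippet requires a RISC-V V-extension toolchain, so it is not buildable on a host compiler:

```c
/* Illustrative only: strip-mined copy of N 32-bit elements using the
   vle32.v/vse32.v intrinsics registered by this patch (SEW=32, LMUL=1).  */
#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

void
copy_i32 (int32_t *dst, const int32_t *src, size_t n)
{
  while (n > 0)
    {
      size_t vl = __riscv_vsetvl_e32m1 (n);            /* vsetvli      */
      vint32m1_t v = __riscv_vle32_v_i32m1 (src, vl);  /* vle32.v      */
      __riscv_vse32_v_i32m1 (dst, v, vl);              /* vse32.v      */
      n -= vl;
      src += vl;
      dst += vl;
    }
}
```

The masked/tail-policy variants (`_m`, `_tu`, `_tum`, `_tumu`, `_mu`) added by `full_preds` follow the same shape with an extra mask and/or merge argument prepended, as set up in `apply_predication`.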

* Re: [PATCH] RISC-V: Support vle.v/vse.v intrinsics
  2022-12-23  0:56 ` 钟居哲
@ 2022-12-23  5:42   ` Kito Cheng
  0 siblings, 0 replies; 3+ messages in thread
From: Kito Cheng @ 2022-12-23  5:42 UTC (permalink / raw)
  To: 钟居哲; +Cc: gcc-patches, palmer

Committed, thanks :)

On Fri, Dec 23, 2022 at 8:57 AM 钟居哲 <juzhe.zhong@rivai.ai> wrote:
>
> This patch is the minimal intrinsics support needed for the VSETVL pass to support the AVL model.
> The corresponding unit tests for vle.v/vse.v will be added once the AVL model is supported
> and the VSETVL pass patch is well tested.
>
>
> juzhe.zhong@rivai.ai
>
> From: juzhe.zhong
> Date: 2022-12-23 08:52
> To: gcc-patches
> CC: kito.cheng; palmer; Ju-Zhe Zhong
> Subject: [PATCH] RISC-V: Support vle.v/vse.v intrinsics
> From: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
>
> gcc/ChangeLog:
>
>         * config/riscv/riscv-protos.h (get_avl_type_rtx): New function.
>         * config/riscv/riscv-v.cc (get_avl_type_rtx): Ditto.
>         * config/riscv/riscv-vector-builtins-bases.cc (class loadstore): New class.
>         (BASE): Ditto.
>         * config/riscv/riscv-vector-builtins-bases.h: Ditto.
>         * config/riscv/riscv-vector-builtins-functions.def (vle): Ditto.
>         (vse): Ditto.
>         * config/riscv/riscv-vector-builtins-shapes.cc (build_one): Ditto.
>         (struct loadstore_def): Ditto.
>         (SHAPE): Ditto.
>         * config/riscv/riscv-vector-builtins-shapes.h: Ditto.
>         * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_U_OPS): New macro.
>         (DEF_RVV_F_OPS): Ditto.
>         (vuint8mf8_t): Add corresponding mask type.
>         (vuint8mf4_t): Ditto.
>         (vuint8mf2_t): Ditto.
>         (vuint8m1_t): Ditto.
>         (vuint8m2_t): Ditto.
>         (vuint8m4_t): Ditto.
>         (vuint8m8_t): Ditto.
>         (vuint16mf4_t): Ditto.
>         (vuint16mf2_t): Ditto.
>         (vuint16m1_t): Ditto.
>         (vuint16m2_t): Ditto.
>         (vuint16m4_t): Ditto.
>         (vuint16m8_t): Ditto.
>         (vuint32mf2_t): Ditto.
>         (vuint32m1_t): Ditto.
>         (vuint32m2_t): Ditto.
>         (vuint32m4_t): Ditto.
>         (vuint32m8_t): Ditto.
>         (vuint64m1_t): Ditto.
>         (vuint64m2_t): Ditto.
>         (vuint64m4_t): Ditto.
>         (vuint64m8_t): Ditto.
>         (vfloat32mf2_t): Ditto.
>         (vfloat32m1_t): Ditto.
>         (vfloat32m2_t): Ditto.
>         (vfloat32m4_t): Ditto.
>         (vfloat32m8_t): Ditto.
>         (vfloat64m1_t): Ditto.
>         (vfloat64m2_t): Ditto.
>         (vfloat64m4_t): Ditto.
>         (vfloat64m8_t): Ditto.
>         * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Adjust for new macro.
>         (DEF_RVV_I_OPS): Ditto.
>         (DEF_RVV_U_OPS): New macro.
>         (DEF_RVV_F_OPS): New macro.
>         (use_real_mask_p): New function.
>         (use_real_merge_p): Ditto.
>         (get_tail_policy_for_pred): Ditto.
>         (get_mask_policy_for_pred): Ditto.
>         (function_builder::apply_predication): Ditto.
>         (function_builder::append_base_name): Ditto.
>         (function_builder::append_sew): Ditto.
>         (function_expander::add_vundef_operand): Ditto.
>         (function_expander::add_mem_operand): Ditto.
>         (function_expander::use_contiguous_load_insn): Ditto.
>         (function_expander::use_contiguous_store_insn): Ditto.
>         * config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Adjust for adding mask type.
>         (vbool64_t): Ditto.
>         (vbool32_t): Ditto.
>         (vbool16_t): Ditto.
>         (vbool8_t): Ditto.
>         (vbool4_t): Ditto.
>         (vbool2_t): Ditto.
>         (vbool1_t): Ditto.
>         (vint8mf8_t): Ditto.
>         (vint8mf4_t): Ditto.
>         (vint8mf2_t): Ditto.
>         (vint8m1_t): Ditto.
>         (vint8m2_t): Ditto.
>         (vint8m4_t): Ditto.
>         (vint8m8_t): Ditto.
>         (vint16mf4_t): Ditto.
>         (vint16mf2_t): Ditto.
>         (vint16m1_t): Ditto.
>         (vint16m2_t): Ditto.
>         (vint16m4_t): Ditto.
>         (vint16m8_t): Ditto.
>         (vint32mf2_t): Ditto.
>         (vint32m1_t): Ditto.
>         (vint32m2_t): Ditto.
>         (vint32m4_t): Ditto.
>         (vint32m8_t): Ditto.
>         (vint64m1_t): Ditto.
>         (vint64m2_t): Ditto.
>         (vint64m4_t): Ditto.
>         (vint64m8_t): Ditto.
>         (vfloat32mf2_t): Ditto.
>         (vfloat32m1_t): Ditto.
>         (vfloat32m2_t): Ditto.
>         (vfloat32m4_t): Ditto.
>         (vfloat32m8_t): Ditto.
>         (vfloat64m1_t): Ditto.
>         (vfloat64m4_t): Ditto.
>         * config/riscv/riscv-vector-builtins.h (function_expander::add_output_operand): New function.
>         (function_expander::add_all_one_mask_operand): Ditto.
>         (function_expander::add_fixed_operand): Ditto.
>         (function_expander::vector_mode): Ditto.
>         (function_base::apply_vl_p): Ditto.
>         (function_base::can_be_overloaded_p): Ditto.
>         * config/riscv/riscv-vsetvl.cc (get_vl): Remove restrict of supporting AVL is not VLMAX.
>         * config/riscv/t-riscv: Add include file.
>
> ---
> gcc/config/riscv/riscv-protos.h               |   1 +
> gcc/config/riscv/riscv-v.cc                   |  10 +-
> .../riscv/riscv-vector-builtins-bases.cc      |  49 +++-
> .../riscv/riscv-vector-builtins-bases.h       |   2 +
> .../riscv/riscv-vector-builtins-functions.def |   3 +
> .../riscv/riscv-vector-builtins-shapes.cc     |  38 ++-
> .../riscv/riscv-vector-builtins-shapes.h      |   1 +
> .../riscv/riscv-vector-builtins-types.def     |  49 +++-
> gcc/config/riscv/riscv-vector-builtins.cc     | 236 +++++++++++++++++-
> gcc/config/riscv/riscv-vector-builtins.def    | 122 ++++-----
> gcc/config/riscv/riscv-vector-builtins.h      |  65 +++++
> gcc/config/riscv/riscv-vsetvl.cc              |   4 -
> gcc/config/riscv/t-riscv                      |   2 +-
> 13 files changed, 506 insertions(+), 76 deletions(-)
>
> diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
> index cfd0f284f91..64ee56b8a7c 100644
> --- a/gcc/config/riscv/riscv-protos.h
> +++ b/gcc/config/riscv/riscv-protos.h
> @@ -171,6 +171,7 @@ enum mask_policy
> };
> enum tail_policy get_prefer_tail_policy ();
> enum mask_policy get_prefer_mask_policy ();
> +rtx get_avl_type_rtx (enum avl_type);
> }
> /* We classify builtin types into two classes:
> diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
> index f02a048f76d..b616ee3e6b3 100644
> --- a/gcc/config/riscv/riscv-v.cc
> +++ b/gcc/config/riscv/riscv-v.cc
> @@ -79,8 +79,7 @@ public:
>    }
>    void add_avl_type_operand ()
>    {
> -    rtx vlmax_rtx = gen_int_mode (avl_type::VLMAX, Pmode);
> -    add_input_operand (vlmax_rtx, Pmode);
> +    add_input_operand (get_avl_type_rtx (avl_type::VLMAX), Pmode);
>    }
>    void expand (enum insn_code icode, bool temporary_volatile_p = false)
> @@ -342,4 +341,11 @@ get_prefer_mask_policy ()
>    return MASK_ANY;
> }
> +/* Get avl_type rtx.  */
> +rtx
> +get_avl_type_rtx (enum avl_type type)
> +{
> +  return gen_int_mode (type, Pmode);
> +}
> +
> } // namespace riscv_vector
> diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
> index c1193dbbfb5..10373e5ccf2 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
> +++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
> @@ -53,6 +53,11 @@ template<bool VLMAX_P>
> class vsetvl : public function_base
> {
> public:
> +  bool apply_vl_p () const override
> +  {
> +    return false;
> +  }
> +
>    rtx expand (function_expander &e) const override
>    {
>      if (VLMAX_P)
> @@ -79,11 +84,47 @@ public:
>    }
> };
> +/* Implements vle.v/vse.v codegen.  */
> +template <bool STORE_P>
> +class loadstore : public function_base
> +{
> +  unsigned int call_properties (const function_instance &) const override
> +  {
> +    if (STORE_P)
> +      return CP_WRITE_MEMORY;
> +    else
> +      return CP_READ_MEMORY;
> +  }
> +
> +  bool can_be_overloaded_p (enum predication_type_index pred) const override
> +  {
> +    if (STORE_P)
> +      return true;
> +    return pred != PRED_TYPE_none && pred != PRED_TYPE_mu;
> +  }
> +
> +  rtx expand (function_expander &e) const override
> +  {
> +    if (STORE_P)
> +      return e.use_contiguous_store_insn (code_for_pred_mov (e.vector_mode ()));
> +    else
> +      return e.use_contiguous_load_insn (code_for_pred_mov (e.vector_mode ()));
> +  }
> +};
> +
> static CONSTEXPR const vsetvl<false> vsetvl_obj;
> static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
> -namespace bases {
> -const function_base *const vsetvl = &vsetvl_obj;
> -const function_base *const vsetvlmax = &vsetvlmax_obj;
> -}
> +static CONSTEXPR const loadstore<false> vle_obj;
> +static CONSTEXPR const loadstore<true> vse_obj;
> +
> +/* Declare the function base NAME, pointing it to an instance
> +   of class <NAME>_obj.  */
> +#define BASE(NAME) \
> +  namespace bases { const function_base *const NAME = &NAME##_obj; }
> +
> +BASE (vsetvl)
> +BASE (vsetvlmax)
> +BASE (vle)
> +BASE (vse)
> } // end namespace riscv_vector
> diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
> index a0ae18eef03..79684bcb50d 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-bases.h
> +++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
> @@ -26,6 +26,8 @@ namespace riscv_vector {
> namespace bases {
> extern const function_base *const vsetvl;
> extern const function_base *const vsetvlmax;
> +extern const function_base *const vle;
> +extern const function_base *const vse;
> }
> } // end namespace riscv_vector
> diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
> index dc41537865e..e5ebb7d829c 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-functions.def
> +++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
> @@ -39,5 +39,8 @@ along with GCC; see the file COPYING3. If not see
> /* 6. Configuration-Setting Instructions.  */
> DEF_RVV_FUNCTION (vsetvl, vsetvl, none_preds, i_none_size_size_ops)
> DEF_RVV_FUNCTION (vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
> +/* 7. Vector Loads and Stores.  */
> +DEF_RVV_FUNCTION (vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
> +DEF_RVV_FUNCTION (vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
> #undef DEF_RVV_FUNCTION
> diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
> index bb2ee8767a0..0332c031ce4 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
> +++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
> @@ -48,6 +48,7 @@ build_one (function_builder &b, const function_group_info &group,
>    tree return_type = group.ops_infos.ret.get_tree_type (
>      group.ops_infos.types[vec_type_idx].index);
>    b.allocate_argument_types (function_instance, argument_types);
> +  b.apply_predication (function_instance, return_type, argument_types);
>    b.add_unique_function (function_instance, (*group.shape), return_type,
> argument_types);
> }
> @@ -93,13 +94,46 @@ struct vsetvl_def : public build_base
>      /* vsetvl* instruction doesn't have C++ overloaded functions.  */
>      if (overloaded_p)
>        return nullptr;
> -    b.append_name ("__riscv_");
> -    b.append_name (instance.base_name);
> +    b.append_base_name (instance.base_name);
>      b.append_name (type_suffixes[instance.type.index].vsetvl);
>      return b.finish_name ();
>    }
> };
> +
> +/* loadstore_def class.  */
> +struct loadstore_def : public build_base
> +{
> +  char *get_name (function_builder &b, const function_instance &instance,
> +   bool overloaded_p) const override
> +  {
> +    /* Return nullptr if it cannot be overloaded.  */
> +    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
> +      return nullptr;
> +
> +    b.append_base_name (instance.base_name);
> +
> +    tree type = builtin_types[instance.type.index].vector;
> +    machine_mode mode = TYPE_MODE (type);
> +    int sew = GET_MODE_BITSIZE (GET_MODE_INNER (mode));
> +    /* vop --> vop<sew>.  */
> +    b.append_sew (sew);
> +
> +    /* vop<sew>_v --> vop<sew>_v_<type>.  */
> +    if (!overloaded_p)
> +      {
> + /* vop<sew> --> vop<sew>_v.  */
> + b.append_name (operand_suffixes[instance.op_info->op]);
> + /* vop<sew>_v --> vop<sew>_v_<type>.  */
> + b.append_name (type_suffixes[instance.type.index].vector);
> +      }
> +
> +    b.append_name (predication_suffixes[instance.pred]);
> +    return b.finish_name ();
> +  }
> +};
> +
> SHAPE(vsetvl, vsetvl)
> SHAPE(vsetvl, vsetvlmax)
> +SHAPE(loadstore, loadstore)
> } // end namespace riscv_vector
> diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
> index f2d876fb133..b17dcd88877 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
> +++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
> @@ -26,6 +26,7 @@ namespace riscv_vector {
> namespace shapes {
> extern const function_shape *const vsetvl;
> extern const function_shape *const vsetvlmax;
> +extern const function_shape *const loadstore;
> }
> } // end namespace riscv_vector
> diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
> index f282a5e7654..6a867c99987 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-types.def
> +++ b/gcc/config/riscv/riscv-vector-builtins-types.def
> @@ -18,12 +18,24 @@ You should have received a copy of the GNU General Public License
> along with GCC; see the file COPYING3. If not see
> <http://www.gnu.org/licenses/>. */
> -/* Use "DEF_ALL_SIGNED_INTEGER" macro include all signed integer which will be
> +/* Use the "DEF_RVV_I_OPS" macro to include all signed integer types which will be
>     iterated and registered as intrinsic functions.  */
> #ifndef DEF_RVV_I_OPS
> #define DEF_RVV_I_OPS(TYPE, REQUIRE)
> #endif
> +/* Use the "DEF_RVV_U_OPS" macro to include all unsigned integer types which will be
> +   iterated and registered as intrinsic functions.  */
> +#ifndef DEF_RVV_U_OPS
> +#define DEF_RVV_U_OPS(TYPE, REQUIRE)
> +#endif
> +
> +/* Use the "DEF_RVV_F_OPS" macro to include all floating-point types which will be
> +   iterated and registered as intrinsic functions.  */
> +#ifndef DEF_RVV_F_OPS
> +#define DEF_RVV_F_OPS(TYPE, REQUIRE)
> +#endif
> +
> DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
> DEF_RVV_I_OPS (vint8mf4_t, 0)
> DEF_RVV_I_OPS (vint8mf2_t, 0)
> @@ -47,4 +59,39 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
> DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
> DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint8mf4_t, 0)
> +DEF_RVV_U_OPS (vuint8mf2_t, 0)
> +DEF_RVV_U_OPS (vuint8m1_t, 0)
> +DEF_RVV_U_OPS (vuint8m2_t, 0)
> +DEF_RVV_U_OPS (vuint8m4_t, 0)
> +DEF_RVV_U_OPS (vuint8m8_t, 0)
> +DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint16mf2_t, 0)
> +DEF_RVV_U_OPS (vuint16m1_t, 0)
> +DEF_RVV_U_OPS (vuint16m2_t, 0)
> +DEF_RVV_U_OPS (vuint16m4_t, 0)
> +DEF_RVV_U_OPS (vuint16m8_t, 0)
> +DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint32m1_t, 0)
> +DEF_RVV_U_OPS (vuint32m2_t, 0)
> +DEF_RVV_U_OPS (vuint32m4_t, 0)
> +DEF_RVV_U_OPS (vuint32m8_t, 0)
> +DEF_RVV_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
> +DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
> +
> +DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
> +DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
> +DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
> +DEF_RVV_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
> +DEF_RVV_F_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32)
> +DEF_RVV_F_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
> +DEF_RVV_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
> +DEF_RVV_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
> +DEF_RVV_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
> +
> #undef DEF_RVV_I_OPS
> +#undef DEF_RVV_U_OPS
> +#undef DEF_RVV_F_OPS
> diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
> index 43150aa47a4..9170776f979 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.cc
> +++ b/gcc/config/riscv/riscv-vector-builtins.cc
> @@ -44,6 +44,7 @@
> #include "attribs.h"
> #include "targhooks.h"
> #include "regs.h"
> +#include "emit-rtl.h"
> #include "riscv-vector-builtins.h"
> #include "riscv-vector-builtins-shapes.h"
> #include "riscv-vector-builtins-bases.h"
> @@ -105,11 +106,20 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
> const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
> #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
>      VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
> -      VSETVL_SUFFIX)                                            \
> +      VSETVL_SUFFIX, MASK_TYPE)                                 \
>    {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
> #include "riscv-vector-builtins.def"
> };
> +/* Mask type for each RVV type.  */
> +const vector_type_index mask_types[NUM_VECTOR_TYPES + 1] = {
> +#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
> +      VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
> +      VSETVL_SUFFIX, MASK_TYPE)                                 \
> +  VECTOR_TYPE_##MASK_TYPE,
> +#include "riscv-vector-builtins.def"
> +};
> +
> /* Static information about predication suffix for each RVV type.  */
> const char *const predication_suffixes[NUM_PRED_TYPES] = {
>    "", /* PRED_TYPE_none.  */
> @@ -123,6 +133,14 @@ static const rvv_type_info i_ops[] = {
> #include "riscv-vector-builtins-types.def"
>    {NUM_VECTOR_TYPES, 0}};
> +/* A list of all types that will be registered for intrinsic functions.  */
> +static const rvv_type_info all_ops[] = {
> +#define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
> +#define DEF_RVV_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
> +#define DEF_RVV_F_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
> +#include "riscv-vector-builtins-types.def"
> +  {NUM_VECTOR_TYPES, 0}};
> +
> static CONSTEXPR const rvv_arg_type_info rvv_arg_type_info_end
>    = rvv_arg_type_info (NUM_BASE_TYPES);
> @@ -134,10 +152,28 @@ static CONSTEXPR const rvv_arg_type_info void_args[]
> static CONSTEXPR const rvv_arg_type_info size_args[]
>    = {rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
> +/* A list of args for vector_type func (const scalar_type *) function.  */
> +static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_args[]
> +  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr), rvv_arg_type_info_end};
> +
> +/* A list of args for void func (scalar_type *, vector_type) function.  */
> +static CONSTEXPR const rvv_arg_type_info scalar_ptr_args[]
> +  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
> +     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
> +
> /* A list of none preds that will be registered for intrinsic functions.  */
> static CONSTEXPR const predication_type_index none_preds[]
>    = {PRED_TYPE_none, NUM_PRED_TYPES};
> +/* vop/vop_m/vop_tu/vop_tum/vop_tumu/vop_mu will be registered.  */
> +static CONSTEXPR const predication_type_index full_preds[]
> +  = {PRED_TYPE_none, PRED_TYPE_m,  PRED_TYPE_tu,  PRED_TYPE_tum,
> +     PRED_TYPE_tumu, PRED_TYPE_mu, NUM_PRED_TYPES};
> +
> +/* vop/vop_m will be registered.  */
> +static CONSTEXPR const predication_type_index none_m_preds[]
> +  = {PRED_TYPE_none, PRED_TYPE_m, NUM_PRED_TYPES};
> +
> /* A static operand information for size_t func (void) function registration. */
> static CONSTEXPR const rvv_op_info i_none_size_void_ops
>    = {i_ops, /* Types */
> @@ -153,6 +189,22 @@ static CONSTEXPR const rvv_op_info i_none_size_size_ops
>       rvv_arg_type_info (RVV_BASE_size), /* Return type */
>       size_args /* Args */};
> +/* A static operand information for vector_type func (const scalar_type *)
> + * function registration. */
> +static CONSTEXPR const rvv_op_info all_v_scalar_const_ptr_ops
> +  = {all_ops,   /* Types */
> +     OP_TYPE_v,   /* Suffix */
> +     rvv_arg_type_info (RVV_BASE_vector), /* Return type */
> +     scalar_const_ptr_args /* Args */};
> +
> +/* A static operand information for void func (scalar_type *, vector_type)
> + * function registration. */
> +static CONSTEXPR const rvv_op_info all_v_scalar_ptr_ops
> +  = {all_ops, /* Types */
> +     OP_TYPE_v, /* Suffix */
> +     rvv_arg_type_info (RVV_BASE_void), /* Return type */
> +     scalar_ptr_args /* Args */};
> +
> /* A list of all RVV intrinsic functions.  */
> static function_group_info function_groups[] = {
> #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
> @@ -362,6 +414,42 @@ check_required_extensions (uint64_t required_extensions)
>    return true;
> }
> +/* Return true if predication is using a real mask operand.  */
> +static bool
> +use_real_mask_p (enum predication_type_index pred)
> +{
> +  return pred == PRED_TYPE_m || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
> + || pred == PRED_TYPE_mu;
> +}
> +
> +/* Return true if predication is using a real merge operand.  */
> +static bool
> +use_real_merge_p (enum predication_type_index pred)
> +{
> +  return pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu
> + || pred == PRED_TYPE_mu;
> +}
> +
> +/* Get the TAIL policy for PRED.  If PRED indicates TU, return TAIL_UNDISTURBED.
> +   Otherwise, return the preferred default configuration.  */
> +static rtx
> +get_tail_policy_for_pred (enum predication_type_index pred)
> +{
> +  if (pred == PRED_TYPE_tu || pred == PRED_TYPE_tum || pred == PRED_TYPE_tumu)
> +    return gen_int_mode (TAIL_UNDISTURBED, Pmode);
> +  return gen_int_mode (get_prefer_tail_policy (), Pmode);
> +}
> +
> +/* Get the MASK policy for PRED.  If PRED indicates MU, return MASK_UNDISTURBED.
> +   Otherwise, return the preferred default configuration.  */
> +static rtx
> +get_mask_policy_for_pred (enum predication_type_index pred)
> +{
> +  if (pred == PRED_TYPE_tumu || pred == PRED_TYPE_mu)
> +    return gen_int_mode (MASK_UNDISTURBED, Pmode);
> +  return gen_int_mode (get_prefer_mask_policy (), Pmode);
> +}
> +
> tree
> rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
> {
> @@ -546,6 +634,28 @@ function_builder::allocate_argument_types (const function_instance &instance,
>        instance.op_info->args[i].get_tree_type (instance.type.index));
> }
> +/* Apply predication to the argument types.  */
> +void
> +function_builder::apply_predication (const function_instance &instance,
> +      tree return_type,
> +      vec<tree> &argument_types) const
> +{
> +  /* These predication types need a merge operand of the return type.  */
> +  if (instance.pred == PRED_TYPE_tu || instance.pred == PRED_TYPE_tum
> +      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
> +    argument_types.quick_insert (0, return_type);
> +
> +  /* These predication types need a mask operand.  */
> +  tree mask_type = builtin_types[mask_types[instance.type.index]].vector;
> +  if (instance.pred == PRED_TYPE_m || instance.pred == PRED_TYPE_tum
> +      || instance.pred == PRED_TYPE_tumu || instance.pred == PRED_TYPE_mu)
> +    argument_types.quick_insert (0, mask_type);
> +
> +  /* Check if the vl parameter is needed.  */
> +  if (instance.base->apply_vl_p ())
> +    argument_types.quick_push (size_type_node);
> +}
> +
> /* Register all the functions in GROUP.  */
> void
> function_builder::register_function_group (const function_group_info &group)
> @@ -560,6 +670,37 @@ function_builder::append_name (const char *name)
>    obstack_grow (&m_string_obstack, name, strlen (name));
> }
> +/* Add "__riscv_" and "name".  */
> +void
> +function_builder::append_base_name (const char *name)
> +{
> +  append_name ("__riscv_");
> +  append_name (name);
> +}
> +
> +/* Add SEW into function name.  */
> +void
> +function_builder::append_sew (int sew)
> +{
> +  switch (sew)
> +    {
> +    case 8:
> +      append_name ("8");
> +      break;
> +    case 16:
> +      append_name ("16");
> +      break;
> +    case 32:
> +      append_name ("32");
> +      break;
> +    case 64:
> +      append_name ("64");
> +      break;
> +    default:
> +      gcc_unreachable ();
> +    }
> +}
> +
> /* Zero-terminate and complete the function name being built.  */
> char *
> function_builder::finish_name ()
> @@ -694,6 +835,99 @@ function_expander::add_input_operand (unsigned argno)
>    add_input_operand (TYPE_MODE (TREE_TYPE (arg)), x);
> }
> +/* Since we may normalize vop/vop_tu/vop_m/vop_tumu ... into a single pattern,
> +   we add an undef operand for the intrinsics that don't need a real merge.  */
> +void
> +function_expander::add_vundef_operand (machine_mode mode)
> +{
> +  rtx vundef = gen_rtx_UNSPEC (mode, gen_rtvec (1, const0_rtx), UNSPEC_VUNDEF);
> +  add_input_operand (mode, vundef);
> +}
> +
> +/* Add a memory operand with mode MODE and address ADDR.  */
> +rtx
> +function_expander::add_mem_operand (machine_mode mode, rtx addr)
> +{
> +  gcc_assert (VECTOR_MODE_P (mode));
> +  rtx mem = gen_rtx_MEM (mode, memory_address (mode, addr));
> +  /* The memory is only guaranteed to be element-aligned.  */
> +  set_mem_align (mem, GET_MODE_ALIGNMENT (GET_MODE_INNER (mode)));
> +  add_fixed_operand (mem);
> +  return mem;
> +}
> +
> +/* Use contiguous load INSN.  */
> +rtx
> +function_expander::use_contiguous_load_insn (insn_code icode)
> +{
> +  gcc_assert (call_expr_nargs (exp) > 0);
> +  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
> +  tree mask_type = builtin_types[mask_types[type.index]].vector;
> +  machine_mode mask_mode = TYPE_MODE (mask_type);
> +
> +  /* Record the offset to get the argument.  */
> +  int arg_offset = 0;
> +
> +  if (use_real_mask_p (pred))
> +    add_input_operand (arg_offset++);
> +  else
> +    add_all_one_mask_operand (mask_mode);
> +
> +  if (use_real_merge_p (pred))
> +    add_input_operand (arg_offset++);
> +  else
> +    add_vundef_operand (mode);
> +
> +  tree addr_arg = CALL_EXPR_ARG (exp, arg_offset++);
> +  rtx addr = expand_normal (addr_arg);
> +  add_mem_operand (mode, addr);
> +
> +  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
> +    add_input_operand (argno);
> +
> +  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
> +  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
> +  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
> +
> +  return generate_insn (icode);
> +}
> +
> +/* Use contiguous store INSN.  */
> +rtx
> +function_expander::use_contiguous_store_insn (insn_code icode)
> +{
> +  gcc_assert (call_expr_nargs (exp) > 0);
> +  machine_mode mode = TYPE_MODE (builtin_types[type.index].vector);
> +  tree mask_type = builtin_types[mask_types[type.index]].vector;
> +  machine_mode mask_mode = TYPE_MODE (mask_type);
> +
> +  /* Record the offset to get the argument.  */
> +  int arg_offset = 0;
> +
> +  int addr_loc = use_real_mask_p (pred) ? 1 : 0;
> +  tree addr_arg = CALL_EXPR_ARG (exp, addr_loc);
> +  rtx addr = expand_normal (addr_arg);
> +  rtx mem = add_mem_operand (mode, addr);
> +
> +  if (use_real_mask_p (pred))
> +    add_input_operand (arg_offset++);
> +  else
> +    add_all_one_mask_operand (mask_mode);
> +
> +  /* To model the "+m" constraint, we include the memory operand in the inputs.  */
> +  add_input_operand (mode, mem);
> +
> +  arg_offset++;
> +  for (int argno = arg_offset; argno < call_expr_nargs (exp); argno++)
> +    add_input_operand (argno);
> +
> +  add_input_operand (Pmode, get_tail_policy_for_pred (pred));
> +  add_input_operand (Pmode, get_mask_policy_for_pred (pred));
> +  add_input_operand (Pmode, get_avl_type_rtx (avl_type::NONVLMAX));
> +
> +  return generate_insn (icode);
> +}
> +
> /* Generate instruction ICODE, given that its operands have already
>     been added to M_OPS.  Return the value of the first operand.  */
> rtx
> diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
> index b7a633ed376..7636e34d595 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.def
> +++ b/gcc/config/riscv/riscv-vector-builtins.def
> @@ -44,7 +44,7 @@ along with GCC; see the file COPYING3.  If not see
> #ifndef DEF_RVV_TYPE
> #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE,         \
>      VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
> -      VSETVL_SUFFIX)
> +      VSETVL_SUFFIX, MASK_TYPE)
> #endif
> /* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
> @@ -61,212 +61,212 @@ along with GCC; see the file COPYING3.  If not see
> /* SEW/LMUL = 64:
>     Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1BImode.  */
> -DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , )
> +DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx1BI, VOID, _b64, , , vbool64_t)
> /* SEW/LMUL = 32:
>     Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32.  */
> -DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , )
> +DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx2BI, VNx1BI, _b32, , , vbool32_t)
> /* SEW/LMUL = 16:
>     Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
>     Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32.  */
> -DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , )
> +DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx4BI, VNx2BI, _b16, , , vbool16_t)
> /* SEW/LMUL = 8:
>     Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32.  */
> -DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , )
> +DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx8BI, VNx4BI, _b8, , , vbool8_t)
> /* SEW/LMUL = 4:
>     Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32.  */
> -DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , )
> +DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx16BI, VNx8BI, _b4, , , vbool4_t)
> /* SEW/LMUL = 2:
>     Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32.  */
> -DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , )
> +DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx32BI, VNx16BI, _b2, , , vbool2_t)
> /* SEW/LMUL = 1:
>     Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32.  */
> -DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , )
> +DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx64BI, VNx32BI, _b1, , , vbool1_t)
> /* LMUL = 1/8:
>     Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1QImode.  */
> DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, intQI, VNx1QI, VOID, _i8mf8, _i8,
> -       _e8mf8)
> +       _e8mf8, vbool64_t)
> DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, unsigned_intQI, VNx1QI, VOID,
> -       _u8mf8, _u8, _e8mf8)
> +       _u8mf8, _u8, _e8mf8, vbool64_t)
> /* LMUL = 1/4:
>     Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, intQI, VNx2QI, VNx1QI, _i8mf4,
> -       _i8, _e8mf4)
> +       _i8, _e8mf4, vbool32_t)
> DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, unsigned_intQI, VNx2QI, VNx1QI,
> -       _u8mf4, _u8, _e8mf4)
> +       _u8mf4, _u8, _e8mf4, vbool32_t)
> /* LMUL = 1/2:
>     Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, intQI, VNx4QI, VNx2QI, _i8mf2,
> -       _i8, _e8mf2)
> +       _i8, _e8mf2, vbool16_t)
> DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, unsigned_intQI, VNx4QI, VNx2QI,
> -       _u8mf2, _u8, _e8mf2)
> +       _u8mf2, _u8, _e8mf2, vbool16_t)
> /* LMUL = 1:
>     Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, intQI, VNx8QI, VNx4QI, _i8m1, _i8,
> -       _e8m1)
> +       _e8m1, vbool8_t)
> DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, unsigned_intQI, VNx8QI, VNx4QI,
> -       _u8m1, _u8, _e8m1)
> +       _u8m1, _u8, _e8m1, vbool8_t)
> /* LMUL = 2:
>     Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, intQI, VNx16QI, VNx8QI, _i8m2, _i8,
> -       _e8m2)
> +       _e8m2, vbool4_t)
> DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, unsigned_intQI, VNx16QI, VNx8QI,
> -       _u8m2, _u8, _e8m2)
> +       _u8m2, _u8, _e8m2, vbool4_t)
> /* LMUL = 4:
>     Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, intQI, VNx32QI, VNx16QI, _i8m4,
> -       _i8, _e8m4)
> +       _i8, _e8m4, vbool2_t)
> DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, unsigned_intQI, VNx32QI, VNx16QI,
> -       _u8m4, _u8, _e8m4)
> +       _u8m4, _u8, _e8m4, vbool2_t)
> /* LMUL = 8:
>     Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, intQI, VNx64QI, VNx32QI, _i8m8,
> -       _i8, _e8m8)
> +       _i8, _e8m8, vbool1_t)
> DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, unsigned_intQI, VNx64QI, VNx32QI,
> -       _u8m8, _u8, _e8m8)
> +       _u8m8, _u8, _e8m8, vbool1_t)
> /* LMUL = 1/4:
>     Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1HImode.  */
> DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, intHI, VNx1HI, VOID, _i16mf4,
> -       _i16, _e16mf4)
> +       _i16, _e16mf4, vbool64_t)
> DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, unsigned_intHI, VNx1HI, VOID,
> -       _u16mf4, _u16, _e16mf4)
> +       _u16mf4, _u16, _e16mf4, vbool64_t)
> /* LMUL = 1/2:
>     Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, intHI, VNx2HI, VNx1HI, _i16mf2,
> -       _i16, _e16mf2)
> +       _i16, _e16mf2, vbool32_t)
> DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, unsigned_intHI, VNx2HI,
> -       VNx1HI, _u16mf2, _u16, _e16mf2)
> +       VNx1HI, _u16mf2, _u16, _e16mf2, vbool32_t)
> /* LMUL = 1:
>     Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, intHI, VNx4HI, VNx2HI, _i16m1,
> -       _i16, _e16m1)
> +       _i16, _e16m1, vbool16_t)
> DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, unsigned_intHI, VNx4HI, VNx2HI,
> -       _u16m1, _u16, _e16m1)
> +       _u16m1, _u16, _e16m1, vbool16_t)
> /* LMUL = 2:
>     Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, intHI, VNx8HI, VNx4HI, _i16m2,
> -       _i16, _e16m2)
> +       _i16, _e16m2, vbool8_t)
> DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, unsigned_intHI, VNx8HI, VNx4HI,
> -       _u16m2, _u16, _e16m2)
> +       _u16m2, _u16, _e16m2, vbool8_t)
> /* LMUL = 4:
>     Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, intHI, VNx16HI, VNx8HI, _i16m4,
> -       _i16, _e16m4)
> +       _i16, _e16m4, vbool4_t)
> DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, unsigned_intHI, VNx16HI,
> -       VNx8HI, _u16m4, _u16, _e16m4)
> +       VNx8HI, _u16m4, _u16, _e16m4, vbool4_t)
> /* LMUL = 8:
>     Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, intHI, VNx32HI, VNx16HI, _i16m8,
> -       _i16, _e16m8)
> +       _i16, _e16m8, vbool2_t)
> DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, unsigned_intHI, VNx32HI,
> -       VNx16HI, _u16m8, _u16, _e16m8)
> +       VNx16HI, _u16m8, _u16, _e16m8, vbool2_t)
> /* LMUL = 1/2:
>     Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SImode.  */
> DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx1SI, VOID, _i32mf2,
> -       _i32, _e32mf2)
> +       _i32, _e32mf2, vbool64_t)
> DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, unsigned_int32, VNx1SI, VOID,
> -       _u32mf2, _u32, _e32mf2)
> +       _u32mf2, _u32, _e32mf2, vbool64_t)
> /* LMUL = 1:
>     Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx2SI, VNx1SI, _i32m1,
> -       _i32, _e32m1)
> +       _i32, _e32m1, vbool32_t)
> DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, unsigned_int32, VNx2SI, VNx1SI,
> -       _u32m1, _u32, _e32m1)
> +       _u32m1, _u32, _e32m1, vbool32_t)
> /* LMUL = 2:
>     Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx4SI, VNx2SI, _i32m2,
> -       _i32, _e32m2)
> +       _i32, _e32m2, vbool16_t)
> DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, unsigned_int32, VNx4SI, VNx2SI,
> -       _u32m2, _u32, _e32m2)
> +       _u32m2, _u32, _e32m2, vbool16_t)
> /* LMUL = 4:
>     Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx8SI, VNx4SI, _i32m4,
> -       _i32, _e32m4)
> +       _i32, _e32m4, vbool8_t)
> DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, unsigned_int32, VNx8SI, VNx4SI,
> -       _u32m4, _u32, _e32m4)
> +       _u32m4, _u32, _e32m4, vbool8_t)
> /* LMUL = 8:
>     Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx16SI, VNx8SI, _i32m8,
> -       _i32, _e32m8)
> +       _i32, _e32m8, vbool4_t)
> DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, unsigned_int32, VNx16SI,
> -       VNx8SI, _u32m8, _u32, _e32m8)
> +       VNx8SI, _u32m8, _u32, _e32m8, vbool4_t)
> /* SEW = 64:
>     Disable when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, intDI, VNx1DI, VOID, _i64m1,
> -       _i64, _e64m1)
> +       _i64, _e64m1, vbool64_t)
> DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, unsigned_intDI, VNx1DI, VOID,
> -       _u64m1, _u64, _e64m1)
> +       _u64m1, _u64, _e64m1, vbool64_t)
> DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, intDI, VNx2DI, VOID, _i64m2,
> -       _i64, _e64m2)
> +       _i64, _e64m2, vbool32_t)
> DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, unsigned_intDI, VNx2DI, VOID,
> -       _u64m2, _u64, _e64m2)
> +       _u64m2, _u64, _e64m2, vbool32_t)
> DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, intDI, VNx4DI, VOID, _i64m4,
> -       _i64, _e64m4)
> +       _i64, _e64m4, vbool16_t)
> DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, unsigned_intDI, VNx4DI, VOID,
> -       _u64m4, _u64, _e64m4)
> +       _u64m4, _u64, _e64m4, vbool16_t)
> DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, intDI, VNx8DI, VOID, _i64m8,
> -       _i64, _e64m8)
> +       _i64, _e64m8, vbool8_t)
> DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, unsigned_intDI, VNx8DI, VOID,
> -       _u64m8, _u64, _e64m8)
> +       _u64m8, _u64, _e64m8, vbool8_t)
> /* LMUL = 1/2:
>     Only enable when TARGET_MIN_VLEN > 32 and machine mode = VNx1SFmode.  */
> DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx1SF, VOID,
> -       _f32mf2, _f32, _e32mf2)
> +       _f32mf2, _f32, _e32mf2, vbool64_t)
> /* LMUL = 1:
>     Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx2SF, VNx1SF,
> -       _f32m1, _f32, _e32m1)
> +       _f32m1, _f32, _e32m1, vbool32_t)
> /* LMUL = 2:
>     Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx4SF, VNx2SF,
> -       _f32m2, _f32, _e32m2)
> +       _f32m2, _f32, _e32m2, vbool16_t)
> /* LMUL = 4:
>     Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx8SF, VNx4SF,
> -       _f32m4, _f32, _e32m4)
> +       _f32m4, _f32, _e32m4, vbool8_t)
> /* LMUL = 8:
>     Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
>     Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32.  */
> DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx16SF, VNx8SF,
> -       _f32m8, _f32, _e32m8)
> +       _f32m8, _f32, _e32m8, vbool4_t)
> /* SEW = 64:
>     Disable when !TARGET_VECTOR_FP64.  */
> DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx1DF, VOID, _f64m1,
> -       _f64, _e64m1)
> +       _f64, _e64m1, vbool64_t)
> DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx2DF, VOID, _f64m2,
> -       _f64, _e64m2)
> +       _f64, _e64m2, vbool32_t)
> DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx4DF, VOID, _f64m4,
> -       _f64, _e64m4)
> +       _f64, _e64m4, vbool16_t)
> DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx8DF, VOID, _f64m8,
> -       _f64, _e64m8)
> +       _f64, _e64m8, vbool8_t)
> DEF_RVV_OP_TYPE (vv)
> DEF_RVV_OP_TYPE (vx)
> diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
> index 425da12326c..c13df99cb5b 100644
> --- a/gcc/config/riscv/riscv-vector-builtins.h
> +++ b/gcc/config/riscv/riscv-vector-builtins.h
> @@ -264,10 +264,13 @@ public:
>    ~function_builder ();
>    void allocate_argument_types (const function_instance &, vec<tree> &) const;
> +  void apply_predication (const function_instance &, tree, vec<tree> &) const;
>    void add_unique_function (const function_instance &, const function_shape *,
>     tree, vec<tree> &);
>    void register_function_group (const function_group_info &);
>    void append_name (const char *);
> +  void append_base_name (const char *);
> +  void append_sew (int);
>    char *finish_name ();
> private:
> @@ -315,6 +318,16 @@ public:
>    void add_input_operand (machine_mode, rtx);
>    void add_input_operand (unsigned argno);
> +  void add_output_operand (machine_mode, rtx);
> +  void add_all_one_mask_operand (machine_mode mode);
> +  void add_vundef_operand (machine_mode mode);
> +  void add_fixed_operand (rtx);
> +  rtx add_mem_operand (machine_mode, rtx);
> +
> +  machine_mode vector_mode (void) const;
> +
> +  rtx use_contiguous_load_insn (insn_code);
> +  rtx use_contiguous_store_insn (insn_code);
>    rtx generate_insn (insn_code);
>    /* The function call expression.  */
> @@ -342,6 +355,12 @@ public:
>       in addition to reading its arguments and returning a result.  */
>    virtual unsigned int call_properties (const function_instance &) const;
> +  /* Return true if intrinsics should apply vl operand.  */
> +  virtual bool apply_vl_p () const;
> +
> +  /* Return true if intrinsic can be overloaded.  */
> +  virtual bool can_be_overloaded_p (enum predication_type_index) const;
> +
>    /* Expand the given call into rtl.  Return the result of the function,
>       or an arbitrary value if the function doesn't return a result.  */
>    virtual rtx expand (function_expander &) const = 0;
> @@ -394,6 +413,37 @@ function_expander::add_input_operand (machine_mode mode, rtx op)
>    create_input_operand (&m_ops[opno++], op, mode);
> }
> +/* Create output and add it into M_OPS and increase OPNO.  */
> +inline void
> +function_expander::add_output_operand (machine_mode mode, rtx target)
> +{
> +  create_output_operand (&m_ops[opno++], target, mode);
> +}
> +
> +/* Since we may normalize vop/vop_tu/vop_m/vop_tumu ... into a single
> +   pattern, we add a fake all-true mask for the intrinsics that don't
> +   need a real mask.  */
> +inline void
> +function_expander::add_all_one_mask_operand (machine_mode mode)
> +{
> +  add_input_operand (mode, CONSTM1_RTX (mode));
> +}
> +
> +/* Add an operand that must be X.  The only way of legitimizing an
> +   invalid X is to reload the address of a MEM.  */
> +inline void
> +function_expander::add_fixed_operand (rtx x)
> +{
> +  create_fixed_operand (&m_ops[opno++], x);
> +}
> +
> +/* Return the machine_mode of the corresponding vector type.  */
> +inline machine_mode
> +function_expander::vector_mode (void) const
> +{
> +  return TYPE_MODE (builtin_types[type.index].vector);
> +}
> +
> /* Default implementation of function_base::call_properties, with conservatively
>     correct behavior for floating-point instructions.  */
> inline unsigned int
> @@ -405,6 +455,21 @@ function_base::call_properties (const function_instance &instance) const
>    return flags;
> }
> +/* We apply the vl operand by default since most of the intrinsics
> +   have a vl operand.  */
> +inline bool
> +function_base::apply_vl_p () const
> +{
> +  return true;
> +}
> +
> +/* Since most intrinsics can be overloaded, return true by default.  */
> +inline bool
> +function_base::can_be_overloaded_p (enum predication_type_index) const
> +{
> +  return true;
> +}
> +
> } // end namespace riscv_vector
> #endif
> diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
> index 72f1e4059ab..01530c1ae75 100644
> --- a/gcc/config/riscv/riscv-vsetvl.cc
> +++ b/gcc/config/riscv/riscv-vsetvl.cc
> @@ -302,10 +302,6 @@ get_vl (rtx_insn *rinsn)
> {
>    if (has_vl_op (rinsn))
>      {
> -      /* We only call get_vl for VLMAX use VTYPE instruction.
> - It's used to get the VL operand to emit VLMAX VSETVL instruction:
> - vsetvl a5,zero,e32,m1,ta,ma.  */
> -      gcc_assert (get_attr_avl_type (rinsn) == VLMAX);
>        extract_insn_cached (rinsn);
>        return recog_data.operand[get_attr_vl_op_idx (rinsn)];
>      }
> diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
> index 7af9f5402ec..d30e0235356 100644
> --- a/gcc/config/riscv/t-riscv
> +++ b/gcc/config/riscv/t-riscv
> @@ -9,7 +9,7 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
>    $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) $(TM_P_H) \
>    memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) $(DIAGNOSTIC_H) $(EXPR_H) \
>    $(FUNCTION_H) fold-const.h gimplify.h explow.h stor-layout.h $(REGS_H) \
> -  alias.h langhooks.h attribs.h stringpool.h \
> +  alias.h langhooks.h attribs.h stringpool.h emit-rtl.h \
>    $(srcdir)/config/riscv/riscv-vector-builtins.h \
>    $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
>    $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
> --
> 2.36.3
>


end of thread, other threads:[~2022-12-23  5:43 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-23  0:52 [PATCH] RISC-V: Support vle.v/vse.v intrinsics juzhe.zhong
2022-12-23  0:56 ` 钟居哲
2022-12-23  5:42   ` Kito Cheng
