public inbox for gcc-cvs@sourceware.org
* [gcc r13-6276] RISC-V: Add floating-point RVV C/C++ api
@ 2023-02-22 13:44 Kito Cheng
From: Kito Cheng @ 2023-02-22 13:44 UTC
To: gcc-cvs
https://gcc.gnu.org/g:dc244cdc05a0cc4a7c40c5c5027c12cc1dc6e4d3
commit r13-6276-gdc244cdc05a0cc4a7c40c5c5027c12cc1dc6e4d3
Author: Ju-Zhe Zhong <juzhe.zhong@rivai.ai>
Date: Fri Feb 17 20:51:14 2023 +0800
RISC-V: Add floating-point RVV C/C++ api
Add RVV floating-point C/C++ API tests.
The API unit tests all pass but are not committed here.
They are located in the RISC-V foundation repo:
https://github.com/riscv-collab/riscv-gcc/tree/rvv-submission-v1
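To illustrate the semantics the new floating-point intrinsics expose, here is a scalar C++ model of a masked RVV operation such as vfadd. This is a sketch only, not the GCC implementation or the actual intrinsic API; it assumes the usual RVV merge/mask behavior, where elements past `vl` or with a clear mask bit keep the merge operand's value.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Scalar model of a masked, length-controlled RVV floating-point add.
// Illustration only: inactive elements (mask bit clear, or index >= vl)
// keep the value from the merge/destination operand.
std::vector<double> model_vfadd (const std::vector<double> &merge,
				 const std::vector<double> &a,
				 const std::vector<double> &b,
				 const std::vector<bool> &mask,
				 std::size_t vl)
{
  std::vector<double> vd = merge; // tail and masked-off elements unchanged
  for (std::size_t i = 0; i < vl && i < vd.size (); ++i)
    if (mask[i])
      vd[i] = a[i] + b[i];
  return vd;
}
```

The real intrinsics are generated through the builtin framework this commit extends; the model above only captures the element-wise behavior.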
gcc/ChangeLog:
* config/riscv/iterators.md: New iterator.
* config/riscv/riscv-vector-builtins-bases.cc (class widen_binop): New class.
(enum ternop_type): New enum.
(class vmacc): New class.
(class imac): Ditto.
(class vnmsac): Ditto.
(enum widen_ternop_type): New enum.
(class vmadd): Ditto.
(class vnmsub): Ditto.
(class iwmac): Ditto.
(class vwmacc): Ditto.
(class vwmaccu): Ditto.
(class vwmaccsu): Ditto.
(class vwmaccus): Ditto.
(class reverse_binop): Ditto.
(class vfmacc): Ditto.
(class vfnmsac): Ditto.
(class vfmadd): Ditto.
(class vfnmsub): Ditto.
(class vfnmacc): Ditto.
(class vfmsac): Ditto.
(class vfnmadd): Ditto.
(class vfmsub): Ditto.
(class vfwmacc): Ditto.
(class vfwnmacc): Ditto.
(class vfwmsac): Ditto.
(class vfwnmsac): Ditto.
(class float_misc): Ditto.
(class fcmp): Ditto.
(class vfclass): Ditto.
(class vfcvt_x): Ditto.
(class vfcvt_rtz_x): Ditto.
(class vfcvt_f): Ditto.
(class vfwcvt_x): Ditto.
(class vfwcvt_rtz_x): Ditto.
(class vfwcvt_f): Ditto.
(class vfncvt_x): Ditto.
(class vfncvt_rtz_x): Ditto.
(class vfncvt_f): Ditto.
(class vfncvt_rod_f): Ditto.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h:
* config/riscv/riscv-vector-builtins-functions.def (vzext): Ditto.
(vsext): Ditto.
(vfadd): Ditto.
(vfsub): Ditto.
(vfrsub): Ditto.
(vfwadd): Ditto.
(vfwsub): Ditto.
(vfmul): Ditto.
(vfdiv): Ditto.
(vfrdiv): Ditto.
(vfwmul): Ditto.
(vfmacc): Ditto.
(vfnmsac): Ditto.
(vfmadd): Ditto.
(vfnmsub): Ditto.
(vfnmacc): Ditto.
(vfmsac): Ditto.
(vfnmadd): Ditto.
(vfmsub): Ditto.
(vfwmacc): Ditto.
(vfwnmacc): Ditto.
(vfwmsac): Ditto.
(vfwnmsac): Ditto.
(vfsqrt): Ditto.
(vfrsqrt7): Ditto.
(vfrec7): Ditto.
(vfmin): Ditto.
(vfmax): Ditto.
(vfsgnj): Ditto.
(vfsgnjn): Ditto.
(vfsgnjx): Ditto.
(vfneg): Ditto.
(vfabs): Ditto.
(vmfeq): Ditto.
(vmfne): Ditto.
(vmflt): Ditto.
(vmfle): Ditto.
(vmfgt): Ditto.
(vmfge): Ditto.
(vfclass): Ditto.
(vfmerge): Ditto.
(vfmv_v): Ditto.
(vfcvt_x): Ditto.
(vfcvt_xu): Ditto.
(vfcvt_rtz_x): Ditto.
(vfcvt_rtz_xu): Ditto.
(vfcvt_f): Ditto.
(vfwcvt_x): Ditto.
(vfwcvt_xu): Ditto.
(vfwcvt_rtz_x): Ditto.
(vfwcvt_rtz_xu): Ditto.
(vfwcvt_f): Ditto.
(vfncvt_x): Ditto.
(vfncvt_xu): Ditto.
(vfncvt_rtz_x): Ditto.
(vfncvt_rtz_xu): Ditto.
(vfncvt_f): Ditto.
(vfncvt_rod_f): Ditto.
* config/riscv/riscv-vector-builtins-shapes.cc (struct alu_def): Ditto.
(struct move_def): Ditto.
* config/riscv/riscv-vector-builtins-types.def (DEF_RVV_WEXTF_OPS): New macro.
(DEF_RVV_CONVERT_I_OPS): Ditto.
(DEF_RVV_CONVERT_U_OPS): Ditto.
(DEF_RVV_WCONVERT_I_OPS): Ditto.
(DEF_RVV_WCONVERT_U_OPS): Ditto.
(DEF_RVV_WCONVERT_F_OPS): Ditto.
(vfloat64m1_t): Ditto.
(vfloat64m2_t): Ditto.
(vfloat64m4_t): Ditto.
(vfloat64m8_t): Ditto.
(vint32mf2_t): Ditto.
(vint32m1_t): Ditto.
(vint32m2_t): Ditto.
(vint32m4_t): Ditto.
(vint32m8_t): Ditto.
(vint64m1_t): Ditto.
(vint64m2_t): Ditto.
(vint64m4_t): Ditto.
(vint64m8_t): Ditto.
(vuint32mf2_t): Ditto.
(vuint32m1_t): Ditto.
(vuint32m2_t): Ditto.
(vuint32m4_t): Ditto.
(vuint32m8_t): Ditto.
(vuint64m1_t): Ditto.
(vuint64m2_t): Ditto.
(vuint64m4_t): Ditto.
(vuint64m8_t): Ditto.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_CONVERT_I_OPS): Ditto.
(DEF_RVV_CONVERT_U_OPS): Ditto.
(DEF_RVV_WCONVERT_I_OPS): Ditto.
(DEF_RVV_WCONVERT_U_OPS): Ditto.
(DEF_RVV_WCONVERT_F_OPS): Ditto.
(DEF_RVV_F_OPS): Ditto.
(DEF_RVV_WEXTF_OPS): Ditto.
(required_extensions_p): Adjust for floating-point support.
(check_required_extensions): Ditto.
(unsigned_base_type_p): Ditto.
(get_mode_for_bitsize): Ditto.
(rvv_arg_type_info::get_base_vector_type): Ditto.
(rvv_arg_type_info::get_tree_type): Ditto.
* config/riscv/riscv-vector-builtins.def (v_f): New define.
(f): New define.
(f_v): New define.
(xu_v): New define.
(f_w): New define.
(xu_w): New define.
* config/riscv/riscv-vector-builtins.h (enum rvv_base_type): New enum.
(function_expander::arg_mode): New function.
* config/riscv/vector-iterators.md (sof): New iterator.
(vfrecp): Ditto.
(copysign): Ditto.
(n): Ditto.
(msac): Ditto.
(msub): Ditto.
(fixuns_trunc): Ditto.
(floatuns): Ditto.
* config/riscv/vector.md (@pred_broadcast<mode>): New pattern.
(@pred_<optab><mode>): Ditto.
(@pred_<optab><mode>_scalar): Ditto.
(@pred_<optab><mode>_reverse_scalar): Ditto.
(@pred_<copysign><mode>): Ditto.
(@pred_<copysign><mode>_scalar): Ditto.
(@pred_mul_<optab><mode>): Ditto.
(pred_mul_<optab><mode>_undef_merge): Ditto.
(*pred_<madd_nmsub><mode>): Ditto.
(*pred_<macc_nmsac><mode>): Ditto.
(*pred_mul_<optab><mode>): Ditto.
(@pred_mul_<optab><mode>_scalar): Ditto.
(*pred_mul_<optab><mode>_undef_merge_scalar): Ditto.
(*pred_<madd_nmsub><mode>_scalar): Ditto.
(*pred_<macc_nmsac><mode>_scalar): Ditto.
(*pred_mul_<optab><mode>_scalar): Ditto.
(@pred_neg_mul_<optab><mode>): Ditto.
(pred_neg_mul_<optab><mode>_undef_merge): Ditto.
(*pred_<nmadd_msub><mode>): Ditto.
(*pred_<nmacc_msac><mode>): Ditto.
(*pred_neg_mul_<optab><mode>): Ditto.
(@pred_neg_mul_<optab><mode>_scalar): Ditto.
(*pred_neg_mul_<optab><mode>_undef_merge_scalar): Ditto.
(*pred_<nmadd_msub><mode>_scalar): Ditto.
(*pred_<nmacc_msac><mode>_scalar): Ditto.
(*pred_neg_mul_<optab><mode>_scalar): Ditto.
(@pred_<misc_op><mode>): Ditto.
(@pred_class<mode>): Ditto.
(@pred_dual_widen_<optab><mode>): Ditto.
(@pred_dual_widen_<optab><mode>_scalar): Ditto.
(@pred_single_widen_<plus_minus:optab><mode>): Ditto.
(@pred_single_widen_<plus_minus:optab><mode>_scalar): Ditto.
(@pred_widen_mul_<optab><mode>): Ditto.
(@pred_widen_mul_<optab><mode>_scalar): Ditto.
(@pred_widen_neg_mul_<optab><mode>): Ditto.
(@pred_widen_neg_mul_<optab><mode>_scalar): Ditto.
(@pred_cmp<mode>): Ditto.
(*pred_cmp<mode>): Ditto.
(*pred_cmp<mode>_narrow): Ditto.
(@pred_cmp<mode>_scalar): Ditto.
(*pred_cmp<mode>_scalar): Ditto.
(*pred_cmp<mode>_scalar_narrow): Ditto.
(@pred_eqne<mode>_scalar): Ditto.
(*pred_eqne<mode>_scalar): Ditto.
(*pred_eqne<mode>_scalar_narrow): Ditto.
(@pred_merge<mode>_scalar): Ditto.
(@pred_fcvt_x<v_su>_f<mode>): Ditto.
(@pred_<fix_cvt><mode>): Ditto.
(@pred_<float_cvt><mode>): Ditto.
(@pred_widen_fcvt_x<v_su>_f<mode>): Ditto.
(@pred_widen_<fix_cvt><mode>): Ditto.
(@pred_widen_<float_cvt><mode>): Ditto.
(@pred_extend<mode>): Ditto.
(@pred_narrow_fcvt_x<v_su>_f<mode>): Ditto.
(@pred_narrow_<fix_cvt><mode>): Ditto.
(@pred_narrow_<float_cvt><mode>): Ditto.
(@pred_trunc<mode>): Ditto.
(@pred_rod_trunc<mode>): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/ternop_vv_constraint-3.c: New test.
* gcc.target/riscv/rvv/base/ternop_vv_constraint-4.c: New test.
* gcc.target/riscv/rvv/base/ternop_vv_constraint-5.c: New test.
* gcc.target/riscv/rvv/base/ternop_vv_constraint-6.c: New test.
* gcc.target/riscv/rvv/base/ternop_vx_constraint-8.c: New test.
* gcc.target/riscv/rvv/base/ternop_vx_constraint-9.c: New test.
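The patch below splits the templated `imac` class into separate `vmacc`/`vnmsac`/`vmadd`/`vnmsub` classes; the distinction it encodes is which operand acts as the accumulator. A scalar C++ sketch of the element-wise semantics, following my reading of the RVV spec (not the GCC code itself):

```cpp
#include <cassert>

// Element-wise models of the two integer ternary-op forms:
//   vmacc.vv  vd, vs1, vs2  =>  vd =  (vs1 * vs2) + vd   (accumulator is vd)
//   vmadd.vv  vd, vs1, vs2  =>  vd =  (vs1 * vd)  + vs2  (accumulator is vs2)
//   vnmsac.vv vd, vs1, vs2  =>  vd = -(vs1 * vs2) + vd
//   vnmsub.vv vd, vs1, vs2  =>  vd = -(vs1 * vd)  + vs2
int model_vmacc  (int vd, int vs1, int vs2) { return vs1 * vs2 + vd; }
int model_vmadd  (int vd, int vs1, int vs2) { return vs1 * vd + vs2; }
int model_vnmsac (int vd, int vs1, int vs2) { return -(vs1 * vs2) + vd; }
int model_vnmsub (int vd, int vs1, int vs2) { return -(vs1 * vd) + vs2; }
```

This accumulator distinction is what the boolean first argument of `use_ternop_insn` (true for vmacc/vnmsac, false for vmadd/vnmsub) selects in the classes below.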
Diff:
---
gcc/config/riscv/iterators.md | 8 +-
gcc/config/riscv/riscv-vector-builtins-bases.cc | 777 ++++++++--
gcc/config/riscv/riscv-vector-builtins-bases.h | 60 +
.../riscv/riscv-vector-builtins-functions.def | 146 +-
gcc/config/riscv/riscv-vector-builtins-shapes.cc | 24 +-
gcc/config/riscv/riscv-vector-builtins-types.def | 87 ++
gcc/config/riscv/riscv-vector-builtins.cc | 408 ++++-
gcc/config/riscv/riscv-vector-builtins.def | 6 +-
gcc/config/riscv/riscv-vector-builtins.h | 12 +
gcc/config/riscv/vector-iterators.md | 101 +-
gcc/config/riscv/vector.md | 1580 +++++++++++++++++++-
.../riscv/rvv/base/ternop_vv_constraint-3.c | 83 +
.../riscv/rvv/base/ternop_vv_constraint-4.c | 83 +
.../riscv/rvv/base/ternop_vv_constraint-5.c | 83 +
.../riscv/rvv/base/ternop_vv_constraint-6.c | 83 +
.../riscv/rvv/base/ternop_vx_constraint-8.c | 71 +
.../riscv/rvv/base/ternop_vx_constraint-9.c | 71 +
17 files changed, 3472 insertions(+), 211 deletions(-)
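The vector.md changes distinguish `pred_mul_<optab>` from `pred_neg_mul_<optab>` patterns; these back the four single-width FP multiply-add forms, which differ only in the signs applied to the product and the addend. A hedged scalar sketch of those sign conventions, per my reading of the RVV spec (illustration, not the GCC patterns):

```cpp
#include <cassert>

// Sign conventions of the single-width FP multiply-add family:
//   vfmacc  vd, vs1, vs2  =>  vd =  (vs1 * vs2) + vd
//   vfnmacc vd, vs1, vs2  =>  vd = -(vs1 * vs2) - vd
//   vfmsac  vd, vs1, vs2  =>  vd =  (vs1 * vs2) - vd
//   vfnmsac vd, vs1, vs2  =>  vd = -(vs1 * vs2) + vd
double model_vfmacc  (double vd, double vs1, double vs2) { return vs1 * vs2 + vd; }
double model_vfnmacc (double vd, double vs1, double vs2) { return -(vs1 * vs2) - vd; }
double model_vfmsac  (double vd, double vs1, double vs2) { return vs1 * vs2 - vd; }
double model_vfnmsac (double vd, double vs1, double vs2) { return -(vs1 * vs2) + vd; }
```

The vfmadd/vfnmadd/vfmsub/vfnmsub forms apply the same signs but accumulate into vs2 instead of vd, mirroring the integer vmacc/vmadd split.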
diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index f95dd405e12..5b70ab20758 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -175,7 +175,9 @@
(gt "") (gtu "u")
(ge "") (geu "u")
(lt "") (ltu "u")
- (le "") (leu "u")])
+ (le "") (leu "u")
+ (fix "") (unsigned_fix "u")
+ (float "") (unsigned_float "u")])
;; <su> is like <u>, but the signed form expands to "s" rather than "".
(define_code_attr su [(sign_extend "s") (zero_extend "u")])
@@ -204,6 +206,8 @@
(mult "mul")
(not "one_cmpl")
(neg "neg")
+ (abs "abs")
+ (sqrt "sqrt")
(ss_plus "ssadd")
(us_plus "usadd")
(ss_minus "sssub")
@@ -235,6 +239,8 @@
(mult "mul")
(not "not")
(neg "neg")
+ (abs "abs")
+ (sqrt "sqrt")
(ss_plus "sadd")
(us_plus "saddu")
(ss_minus "ssub")
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index 88142217e45..bfcfab55bb9 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -155,8 +155,11 @@ public:
};
/* Implements
- * vadd/vsub/vand/vor/vxor/vsll/vsra/vsrl/vmin/vmax/vminu/vmaxu/vdiv/vrem/vdivu/vremu/vsadd/vsaddu/vssub/vssubu.
- */
+ vadd/vsub/vand/vor/vxor/vsll/vsra/vsrl/
+ vmin/vmax/vminu/vmaxu/vdiv/vrem/vdivu/
+ vremu/vsadd/vsaddu/vssub/vssubu
+ vfadd/vfsub/
+*/
template<rtx_code CODE>
class binop : public function_base
{
@@ -166,6 +169,7 @@ public:
switch (e.op_info->op)
{
case OP_TYPE_vx:
+ case OP_TYPE_vf:
return e.use_exact_insn (code_for_pred_scalar (CODE, e.vector_mode ()));
case OP_TYPE_vv:
return e.use_exact_insn (code_for_pred (CODE, e.vector_mode ()));
@@ -239,8 +243,8 @@ public:
}
};
-/* Implements vwadd/vwsub/vwmul. */
-template<rtx_code CODE1, rtx_code CODE2>
+/* Implements vwadd/vwsub/vwmul/vfwadd/vfwsub/vfwmul. */
+template<rtx_code CODE1, rtx_code CODE2 = FLOAT_EXTEND>
class widen_binop : public function_base
{
public:
@@ -265,6 +269,31 @@ public:
}
}
};
+template<rtx_code CODE>
+class widen_binop<CODE, FLOAT_EXTEND> : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_dual_widen (CODE, e.vector_mode ()));
+ case OP_TYPE_vf:
+ return e.use_exact_insn (
+ code_for_pred_dual_widen_scalar (CODE, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_single_widen (CODE, e.vector_mode ()));
+ case OP_TYPE_wf:
+ return e.use_exact_insn (
+ code_for_pred_single_widen_scalar (CODE, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
/* Implements vwmulsu. */
class vwmulsu : public function_base
@@ -426,7 +455,7 @@ public:
}
};
-/* Implements vmerge. */
+/* Implements vmerge/vfmerge. */
class vmerge : public function_base
{
public:
@@ -439,6 +468,7 @@ public:
case OP_TYPE_vvm:
return e.use_exact_insn (code_for_pred_merge (e.vector_mode ()));
case OP_TYPE_vxm:
+ case OP_TYPE_vfm:
return e.use_exact_insn (code_for_pred_merge_scalar (e.vector_mode ()));
default:
gcc_unreachable ();
@@ -446,7 +476,7 @@ public:
}
};
-/* Implements vmv.v.x/vmv.v.v. */
+/* Implements vmv.v.x/vmv.v.v/vfmv.v.f. */
class vmv_v : public function_base
{
public:
@@ -457,6 +487,7 @@ public:
case OP_TYPE_v:
return e.use_exact_insn (code_for_pred_mov (e.vector_mode ()));
case OP_TYPE_x:
+ case OP_TYPE_f:
return e.use_exact_insn (code_for_pred_broadcast (e.vector_mode ()));
default:
gcc_unreachable ();
@@ -539,132 +570,144 @@ public:
}
};
-/* Enumerates types of ternary operations.
- We have 2 types ternop:
- - 1. accumulator is vd:
- vmacc.vv vd,vs1,vs2 # vd = vs1 * vs2 + vd.
- - 2. accumulator is vs2:
- vmadd.vv vd,vs1,vs2 # vd = vs1 * vd + vs2. */
-enum ternop_type
+/* Implements vmacc/vnmsac/vmadd/vnmsub. */
+class vmacc : public function_base
{
- TERNOP_VMACC,
- TERNOP_VNMSAC,
- TERNOP_VMADD,
- TERNOP_VNMSUB,
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul_scalar (PLUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
};
-/* Implements vmacc/vnmsac/vmadd/vnmsub. */
-template<ternop_type TERNOP_TYPE>
-class imac : public function_base
+class vnmsac : public function_base
{
public:
bool has_merge_operand_p () const override { return false; }
rtx expand (function_expander &e) const override
{
- switch (TERNOP_TYPE)
- {
- case TERNOP_VMACC:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_ternop_insn (
- true, code_for_pred_mul_scalar (PLUS, e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_ternop_insn (true,
- code_for_pred_mul (PLUS, e.vector_mode ()));
- break;
- case TERNOP_VNMSAC:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_ternop_insn (
- true, code_for_pred_mul_scalar (MINUS, e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_ternop_insn (true, code_for_pred_mul (MINUS,
- e.vector_mode ()));
- break;
- case TERNOP_VMADD:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_ternop_insn (
- false, code_for_pred_mul_scalar (PLUS, e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_ternop_insn (false,
- code_for_pred_mul (PLUS, e.vector_mode ()));
- break;
- case TERNOP_VNMSUB:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_ternop_insn (
- false, code_for_pred_mul_scalar (MINUS, e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_ternop_insn (false,
- code_for_pred_mul (MINUS,
- e.vector_mode ()));
- break;
- default:
- break;
- }
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul_scalar (MINUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul (MINUS, e.vector_mode ()));
gcc_unreachable ();
}
};
-/* Enumerates types of widen ternary operations.
- We have 4 types ternop:
- - 1. vwmacc.
- - 2. vwmaccu.
- - 3. vwmaccsu.
- - 4. vwmaccus. */
-enum widen_ternop_type
+class vmadd : public function_base
{
- WIDEN_TERNOP_VWMACC,
- WIDEN_TERNOP_VWMACCU,
- WIDEN_TERNOP_VWMACCSU,
- WIDEN_TERNOP_VWMACCUS,
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul_scalar (PLUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
};
+class vnmsub : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul_scalar (MINUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul (MINUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+
/* Implements vwmacc<su><su>. */
-template<widen_ternop_type WIDEN_TERNOP_TYPE>
-class iwmac : public function_base
+class vwmacc : public function_base
{
public:
bool has_merge_operand_p () const override { return false; }
rtx expand (function_expander &e) const override
{
- switch (WIDEN_TERNOP_TYPE)
- {
- case WIDEN_TERNOP_VWMACC:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plus_scalar (SIGN_EXTEND,
- e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plus (SIGN_EXTEND, e.vector_mode ()));
- break;
- case WIDEN_TERNOP_VWMACCU:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plus_scalar (ZERO_EXTEND,
- e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plus (ZERO_EXTEND, e.vector_mode ()));
- break;
- case WIDEN_TERNOP_VWMACCSU:
- if (e.op_info->op == OP_TYPE_vx)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plussu_scalar (e.vector_mode ()));
- if (e.op_info->op == OP_TYPE_vv)
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plussu (e.vector_mode ()));
- break;
- case WIDEN_TERNOP_VWMACCUS:
- return e.use_widen_ternop_insn (
- code_for_pred_widen_mul_plusus_scalar (e.vector_mode ()));
- default:
- break;
- }
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plus_scalar (SIGN_EXTEND, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plus (SIGN_EXTEND, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vwmaccu : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plus_scalar (ZERO_EXTEND, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plus (ZERO_EXTEND, e.vector_mode ()));
gcc_unreachable ();
}
};
+class vwmaccsu : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vx)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plussu_scalar (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plussu (e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vwmaccus : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_plusus_scalar (e.vector_mode ()));
+ }
+};
+
/* Implements vmand/vmnand/vmandn/vmxor/vmor/vmnor/vmorn/vmxnor */
template<rtx_code CODE>
class mask_logic : public function_base
@@ -844,6 +887,402 @@ public:
}
};
+/* Implements vfrsub/vfrdiv. */
+template<rtx_code CODE>
+class reverse_binop : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_reverse_scalar (CODE, e.vector_mode ()));
+ }
+};
+
+class vfmacc : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul_scalar (PLUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfnmsac : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul_scalar (MINUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true,
+ code_for_pred_mul (MINUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfmadd : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul_scalar (PLUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfnmsub : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul_scalar (MINUS,
+ e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_mul (MINUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfnmacc : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (
+ true, code_for_pred_neg_mul_scalar (PLUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true,
+ code_for_pred_neg_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfmsac : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (
+ true, code_for_pred_neg_mul_scalar (MINUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (true, code_for_pred_neg_mul (MINUS,
+ e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfnmadd : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (
+ false, code_for_pred_neg_mul_scalar (PLUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_neg_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfmsub : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_ternop_insn (
+ false, code_for_pred_neg_mul_scalar (MINUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_ternop_insn (false,
+ code_for_pred_neg_mul (MINUS,
+ e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfwmacc : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_scalar (PLUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfwnmacc : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_neg_mul_scalar (PLUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_neg_mul (PLUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfwmsac : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_neg_mul_scalar (MINUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_neg_mul (MINUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+class vfwnmsac : public function_base
+{
+public:
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul_scalar (MINUS, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv)
+ return e.use_widen_ternop_insn (
+ code_for_pred_widen_mul (MINUS, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements vfsqrt7/vfrec7/vfclass/vfsgnj/vfsgnjn/vfsgnjx. */
+template<int UNSPEC>
+class float_misc : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_vf)
+ return e.use_exact_insn (code_for_pred_scalar (UNSPEC, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_vv || e.op_info->op == OP_TYPE_v)
+ return e.use_exact_insn (code_for_pred (UNSPEC, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements vmfeq/vmfne/vmflt/vmfgt/vmfle/vmfge. */
+template<rtx_code CODE>
+class fcmp : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vf: {
+ if (CODE == EQ || CODE == NE)
+ return e.use_compare_insn (CODE, code_for_pred_eqne_scalar (
+ e.vector_mode ()));
+ else
+ return e.use_compare_insn (CODE, code_for_pred_cmp_scalar (
+ e.vector_mode ()));
+ }
+ case OP_TYPE_vv: {
+ return e.use_compare_insn (CODE,
+ code_for_pred_cmp (e.vector_mode ()));
+ }
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vfclass. */
+class vfclass : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_class (e.arg_mode (0)));
+ }
+};
+
+/* Implements vfcvt.x. */
+template<int UNSPEC>
+class vfcvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+ }
+};
+
+/* Implements vfcvt.rtz.x. */
+template<rtx_code CODE>
+class vfcvt_rtz_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred (CODE, e.arg_mode (0)));
+ }
+};
+
+class vfcvt_f : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_x_v)
+ return e.use_exact_insn (code_for_pred (FLOAT, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_xu_v)
+ return e.use_exact_insn (
+ code_for_pred (UNSIGNED_FLOAT, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements vfwcvt.x. */
+template<int UNSPEC>
+class vfwcvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_widen_fcvt_x_f (UNSPEC, e.vector_mode ()));
+ }
+};
+
+/* Implements vfwcvt.rtz.x. */
+template<rtx_code CODE>
+class vfwcvt_rtz_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_widen (CODE, e.vector_mode ()));
+ }
+};
+
+class vfwcvt_f : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_f_v)
+ return e.use_exact_insn (code_for_pred_extend (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_x_v)
+ return e.use_exact_insn (code_for_pred_widen (FLOAT, e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_xu_v)
+ return e.use_exact_insn (
+ code_for_pred_widen (UNSIGNED_FLOAT, e.vector_mode ()));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements vfncvt.x. */
+template<int UNSPEC>
+class vfncvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+ }
+};
+
+/* Implements vfncvt.rtz.x. */
+template<rtx_code CODE>
+class vfncvt_rtz_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_narrow (CODE, e.vector_mode ()));
+ }
+};
+
+class vfncvt_f : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_f_w)
+ return e.use_exact_insn (code_for_pred_trunc (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_x_w)
+ return e.use_exact_insn (code_for_pred_narrow (FLOAT, e.arg_mode (0)));
+ if (e.op_info->op == OP_TYPE_xu_w)
+ return e.use_exact_insn (
+ code_for_pred_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+ gcc_unreachable ();
+ }
+};
+
+class vfncvt_rod_f : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_rod_trunc (e.vector_mode ()));
+ }
+};
+
static CONSTEXPR const vsetvl<false> vsetvl_obj;
static CONSTEXPR const vsetvl<true> vsetvlmax_obj;
static CONSTEXPR const loadstore<false, LST_UNIT_STRIDE, false> vle_obj;
@@ -921,14 +1360,14 @@ static CONSTEXPR const icmp<LTU> vmsltu_obj;
static CONSTEXPR const icmp<GTU> vmsgtu_obj;
static CONSTEXPR const icmp<LEU> vmsleu_obj;
static CONSTEXPR const icmp<GEU> vmsgeu_obj;
-static CONSTEXPR const imac<TERNOP_VMACC> vmacc_obj;
-static CONSTEXPR const imac<TERNOP_VNMSAC> vnmsac_obj;
-static CONSTEXPR const imac<TERNOP_VMADD> vmadd_obj;
-static CONSTEXPR const imac<TERNOP_VNMSUB> vnmsub_obj;
-static CONSTEXPR const iwmac<WIDEN_TERNOP_VWMACC> vwmacc_obj;
-static CONSTEXPR const iwmac<WIDEN_TERNOP_VWMACCU> vwmaccu_obj;
-static CONSTEXPR const iwmac<WIDEN_TERNOP_VWMACCSU> vwmaccsu_obj;
-static CONSTEXPR const iwmac<WIDEN_TERNOP_VWMACCUS> vwmaccus_obj;
+static CONSTEXPR const vmacc vmacc_obj;
+static CONSTEXPR const vnmsac vnmsac_obj;
+static CONSTEXPR const vmadd vmadd_obj;
+static CONSTEXPR const vnmsub vnmsub_obj;
+static CONSTEXPR const vwmacc vwmacc_obj;
+static CONSTEXPR const vwmaccu vwmaccu_obj;
+static CONSTEXPR const vwmaccsu vwmaccsu_obj;
+static CONSTEXPR const vwmaccus vwmaccus_obj;
static CONSTEXPR const binop<SS_PLUS> vsadd_obj;
static CONSTEXPR const binop<SS_MINUS> vssub_obj;
static CONSTEXPR const binop<US_PLUS> vsaddu_obj;
@@ -961,6 +1400,62 @@ static CONSTEXPR const mask_misc<UNSPEC_VMSIF> vmsif_obj;
static CONSTEXPR const mask_misc<UNSPEC_VMSOF> vmsof_obj;
static CONSTEXPR const viota viota_obj;
static CONSTEXPR const vid vid_obj;
+static CONSTEXPR const binop<PLUS> vfadd_obj;
+static CONSTEXPR const binop<MINUS> vfsub_obj;
+static CONSTEXPR const reverse_binop<MINUS> vfrsub_obj;
+static CONSTEXPR const widen_binop<PLUS> vfwadd_obj;
+static CONSTEXPR const widen_binop<MINUS> vfwsub_obj;
+static CONSTEXPR const binop<MULT> vfmul_obj;
+static CONSTEXPR const binop<DIV> vfdiv_obj;
+static CONSTEXPR const reverse_binop<DIV> vfrdiv_obj;
+static CONSTEXPR const widen_binop<MULT> vfwmul_obj;
+static CONSTEXPR const vfmacc vfmacc_obj;
+static CONSTEXPR const vfnmsac vfnmsac_obj;
+static CONSTEXPR const vfmadd vfmadd_obj;
+static CONSTEXPR const vfnmsub vfnmsub_obj;
+static CONSTEXPR const vfnmacc vfnmacc_obj;
+static CONSTEXPR const vfmsac vfmsac_obj;
+static CONSTEXPR const vfnmadd vfnmadd_obj;
+static CONSTEXPR const vfmsub vfmsub_obj;
+static CONSTEXPR const vfwmacc vfwmacc_obj;
+static CONSTEXPR const vfwnmacc vfwnmacc_obj;
+static CONSTEXPR const vfwmsac vfwmsac_obj;
+static CONSTEXPR const vfwnmsac vfwnmsac_obj;
+static CONSTEXPR const unop<SQRT> vfsqrt_obj;
+static CONSTEXPR const float_misc<UNSPEC_VFRSQRT7> vfrsqrt7_obj;
+static CONSTEXPR const float_misc<UNSPEC_VFREC7> vfrec7_obj;
+static CONSTEXPR const binop<SMIN> vfmin_obj;
+static CONSTEXPR const binop<SMAX> vfmax_obj;
+static CONSTEXPR const float_misc<UNSPEC_VCOPYSIGN> vfsgnj_obj;
+static CONSTEXPR const float_misc<UNSPEC_VNCOPYSIGN> vfsgnjn_obj;
+static CONSTEXPR const float_misc<UNSPEC_VXORSIGN> vfsgnjx_obj;
+static CONSTEXPR const unop<NEG> vfneg_obj;
+static CONSTEXPR const unop<ABS> vfabs_obj;
+static CONSTEXPR const fcmp<EQ> vmfeq_obj;
+static CONSTEXPR const fcmp<NE> vmfne_obj;
+static CONSTEXPR const fcmp<LT> vmflt_obj;
+static CONSTEXPR const fcmp<GT> vmfgt_obj;
+static CONSTEXPR const fcmp<LE> vmfle_obj;
+static CONSTEXPR const fcmp<GE> vmfge_obj;
+static CONSTEXPR const vfclass vfclass_obj;
+static CONSTEXPR const vmerge vfmerge_obj;
+static CONSTEXPR const vmv_v vfmv_v_obj;
+static CONSTEXPR const vfcvt_x<UNSPEC_VFCVT> vfcvt_x_obj;
+static CONSTEXPR const vfcvt_x<UNSPEC_UNSIGNED_VFCVT> vfcvt_xu_obj;
+static CONSTEXPR const vfcvt_rtz_x<FIX> vfcvt_rtz_x_obj;
+static CONSTEXPR const vfcvt_rtz_x<UNSIGNED_FIX> vfcvt_rtz_xu_obj;
+static CONSTEXPR const vfcvt_f vfcvt_f_obj;
+static CONSTEXPR const vfwcvt_x<UNSPEC_VFCVT> vfwcvt_x_obj;
+static CONSTEXPR const vfwcvt_x<UNSPEC_UNSIGNED_VFCVT> vfwcvt_xu_obj;
+static CONSTEXPR const vfwcvt_rtz_x<FIX> vfwcvt_rtz_x_obj;
+static CONSTEXPR const vfwcvt_rtz_x<UNSIGNED_FIX> vfwcvt_rtz_xu_obj;
+static CONSTEXPR const vfwcvt_f vfwcvt_f_obj;
+static CONSTEXPR const vfncvt_x<UNSPEC_VFCVT> vfncvt_x_obj;
+static CONSTEXPR const vfncvt_x<UNSPEC_UNSIGNED_VFCVT> vfncvt_xu_obj;
+static CONSTEXPR const vfncvt_rtz_x<FIX> vfncvt_rtz_x_obj;
+static CONSTEXPR const vfncvt_rtz_x<UNSIGNED_FIX> vfncvt_rtz_xu_obj;
+static CONSTEXPR const vfncvt_f vfncvt_f_obj;
+static CONSTEXPR const vfncvt_rod_f vfncvt_rod_f_obj;
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
@@ -1084,5 +1579,61 @@ BASE (vmsif)
BASE (vmsof)
BASE (viota)
BASE (vid)
+BASE (vfadd)
+BASE (vfsub)
+BASE (vfrsub)
+BASE (vfwadd)
+BASE (vfwsub)
+BASE (vfmul)
+BASE (vfdiv)
+BASE (vfrdiv)
+BASE (vfwmul)
+BASE (vfmacc)
+BASE (vfnmsac)
+BASE (vfmadd)
+BASE (vfnmsub)
+BASE (vfnmacc)
+BASE (vfmsac)
+BASE (vfnmadd)
+BASE (vfmsub)
+BASE (vfwmacc)
+BASE (vfwnmacc)
+BASE (vfwmsac)
+BASE (vfwnmsac)
+BASE (vfsqrt)
+BASE (vfrsqrt7)
+BASE (vfrec7)
+BASE (vfmin)
+BASE (vfmax)
+BASE (vfsgnj)
+BASE (vfsgnjn)
+BASE (vfsgnjx)
+BASE (vfneg)
+BASE (vfabs)
+BASE (vmfeq)
+BASE (vmfne)
+BASE (vmflt)
+BASE (vmfgt)
+BASE (vmfle)
+BASE (vmfge)
+BASE (vfclass)
+BASE (vfmerge)
+BASE (vfmv_v)
+BASE (vfcvt_x)
+BASE (vfcvt_xu)
+BASE (vfcvt_rtz_x)
+BASE (vfcvt_rtz_xu)
+BASE (vfcvt_f)
+BASE (vfwcvt_x)
+BASE (vfwcvt_xu)
+BASE (vfwcvt_rtz_x)
+BASE (vfwcvt_rtz_xu)
+BASE (vfwcvt_f)
+BASE (vfncvt_x)
+BASE (vfncvt_xu)
+BASE (vfncvt_rtz_x)
+BASE (vfncvt_rtz_xu)
+BASE (vfncvt_f)
+BASE (vfncvt_rod_f)
} // end namespace riscv_vector
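The eight single-width FP fused multiply-add classes registered above (vfmacc through vfnmsub) differ only in which operand is overwritten and in the signs applied. As a reading aid, here is a per-element sketch of their semantics as defined by the RVV specification; the `*_e` helper names are illustrative, not GCC code:

```cpp
#include <cmath>

// Per-element semantics of the RVV single-width FP fused multiply-adds.
// The "acc"/"sac" forms overwrite the addend vd; the "add"/"sub" forms
// overwrite the multiplicand vd instead.
static float vfmacc_e (float vd, float vs1, float vs2) { return std::fma ( vs1, vs2,  vd); }  // +(vs1*vs2) + vd
static float vfnmacc_e(float vd, float vs1, float vs2) { return std::fma (-vs1, vs2, -vd); }  // -(vs1*vs2) - vd
static float vfmsac_e (float vd, float vs1, float vs2) { return std::fma ( vs1, vs2, -vd); }  // +(vs1*vs2) - vd
static float vfnmsac_e(float vd, float vs1, float vs2) { return std::fma (-vs1, vs2,  vd); }  // -(vs1*vs2) + vd
static float vfmadd_e (float vd, float vs1, float vs2) { return std::fma ( vs1, vd,   vs2); } // +(vs1*vd) + vs2
static float vfnmadd_e(float vd, float vs1, float vs2) { return std::fma (-vs1, vd,  -vs2); } // -(vs1*vd) - vs2
static float vfmsub_e (float vd, float vs1, float vs2) { return std::fma ( vs1, vd,  -vs2); } // +(vs1*vd) - vs2
static float vfnmsub_e(float vd, float vs1, float vs2) { return std::fma (-vs1, vd,   vs2); } // -(vs1*vd) + vs2
```

This is why the patch needs eight distinct classes rather than one parameterized binop: the operand roles, not just the signs, differ between the `vfmacc`-style and `vfmadd`-style groups.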
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index e136cd91147..5583dda3a08 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -143,6 +143,66 @@ extern const function_base *const vmsif;
extern const function_base *const vmsof;
extern const function_base *const viota;
extern const function_base *const vid;
+extern const function_base *const vfadd;
+extern const function_base *const vfadd;
+extern const function_base *const vfsub;
+extern const function_base *const vfsub;
+extern const function_base *const vfrsub;
+extern const function_base *const vfwadd;
+extern const function_base *const vfwsub;
+extern const function_base *const vfmul;
+extern const function_base *const vfmul;
+extern const function_base *const vfdiv;
+extern const function_base *const vfdiv;
+extern const function_base *const vfrdiv;
+extern const function_base *const vfwmul;
+extern const function_base *const vfmacc;
+extern const function_base *const vfnmsac;
+extern const function_base *const vfmadd;
+extern const function_base *const vfnmsub;
+extern const function_base *const vfnmacc;
+extern const function_base *const vfmsac;
+extern const function_base *const vfnmadd;
+extern const function_base *const vfmsub;
+extern const function_base *const vfwmacc;
+extern const function_base *const vfwnmacc;
+extern const function_base *const vfwmsac;
+extern const function_base *const vfwnmsac;
+extern const function_base *const vfsqrt;
+extern const function_base *const vfrsqrt7;
+extern const function_base *const vfrec7;
+extern const function_base *const vfmin;
+extern const function_base *const vfmax;
+extern const function_base *const vfsgnj;
+extern const function_base *const vfsgnjn;
+extern const function_base *const vfsgnjx;
+extern const function_base *const vfneg;
+extern const function_base *const vfabs;
+extern const function_base *const vmfeq;
+extern const function_base *const vmfne;
+extern const function_base *const vmflt;
+extern const function_base *const vmfgt;
+extern const function_base *const vmfle;
+extern const function_base *const vmfge;
+extern const function_base *const vfclass;
+extern const function_base *const vfmerge;
+extern const function_base *const vfmv_v;
+extern const function_base *const vfcvt_x;
+extern const function_base *const vfcvt_xu;
+extern const function_base *const vfcvt_rtz_x;
+extern const function_base *const vfcvt_rtz_xu;
+extern const function_base *const vfcvt_f;
+extern const function_base *const vfwcvt_x;
+extern const function_base *const vfwcvt_xu;
+extern const function_base *const vfwcvt_rtz_x;
+extern const function_base *const vfwcvt_rtz_xu;
+extern const function_base *const vfwcvt_f;
+extern const function_base *const vfncvt_x;
+extern const function_base *const vfncvt_xu;
+extern const function_base *const vfncvt_rtz_x;
+extern const function_base *const vfncvt_rtz_xu;
+extern const function_base *const vfncvt_f;
+extern const function_base *const vfncvt_rod_f;
}
} // end namespace riscv_vector
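Among the declarations above, the sign-injection group (vfsgnj/vfsgnjn/vfsgnjx) also serves vfneg and vfabs, which the RVV spec defines as pseudo-instructions over it. A minimal per-element sketch of the spec's semantics, with illustrative helper names:

```cpp
#include <cmath>

// Per-element semantics of the sign-injection ops: the magnitude always
// comes from vs2; the sign comes from vs1 (possibly negated or XORed
// with vs2's own sign bit).
static float vfsgnj_e (float vs2, float vs1) { return std::copysign (vs2,  vs1); }
static float vfsgnjn_e(float vs2, float vs1) { return std::copysign (vs2, -vs1); }
static float vfsgnjx_e(float vs2, float vs1) { return std::signbit (vs1) ? -vs2 : vs2; }

// vfneg.v and vfabs.v are pseudo-ops in the spec:
// vfneg(v) == vfsgnjn_e(v, v), and vfabs(v) == vfsgnjx_e(v, v),
// which matches their registration here as unop<NEG> and unop<ABS>.
```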
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 38bf1b694bb..1ca0537216b 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -105,12 +105,12 @@ DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
// 11.3. Vector Integer Extension
-DEF_RVV_FUNCTION (vzext, alu, full_preds, u_vf2_ops)
-DEF_RVV_FUNCTION (vzext, alu, full_preds, u_vf4_ops)
-DEF_RVV_FUNCTION (vzext, alu, full_preds, u_vf8_ops)
-DEF_RVV_FUNCTION (vsext, alu, full_preds, i_vf2_ops)
-DEF_RVV_FUNCTION (vsext, alu, full_preds, i_vf4_ops)
-DEF_RVV_FUNCTION (vsext, alu, full_preds, i_vf8_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
@@ -275,7 +275,139 @@ DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
-/* TODO: 13. Vector Floating-Point Instructions. */
+/* 13. Vector Floating-Point Instructions. */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_RVV_FUNCTION (vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_RVV_FUNCTION (vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
/* TODO: 14. Vector Reduction Operations. */
/* 15. Vector Mask Instructions. */
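Note the two operand shapes registered for vfwadd/vfwsub in section 13.3: the f_wvv/f_wvf entries correspond to the `.vv`/`.vf` forms, which widen both inputs, while f_wwv/f_wwf correspond to the `.wv`/`.wf` forms, whose first operand is already double-width. Sketched with float widening to double (a simplification for illustration):

```cpp
// The two widening-add operand shapes, modeled with SEW = float and
// 2*SEW = double.  vfwadd.vv widens both inputs before adding;
// vfwadd.wv adds a widened input to an already-wide accumulator type.
static double vfwadd_vv_e(float vs2, float vs1)  { return (double) vs2 + (double) vs1; }
static double vfwadd_wv_e(double vs2, float vs1) { return vs2 + (double) vs1; }
```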
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index abf169dea4c..1fbf0f4e902 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -197,22 +197,12 @@ struct alu_def : public build_base
b.append_base_name (instance.base_name);
- /* vop<sew> --> vop<sew>_<op>. According to rvv-intrinsic-doc, _vv/_vx/_v
- API doesn't have OP suffix in overloaded function name, otherwise, we
- always append OP suffix in function name. For example, vsext_vf2. */
- if (instance.op_info->op == OP_TYPE_vv || instance.op_info->op == OP_TYPE_vx
- || instance.op_info->op == OP_TYPE_v
- || instance.op_info->op == OP_TYPE_x_v)
- {
- if (!overloaded_p)
- b.append_name (operand_suffixes[instance.op_info->op]);
- }
- else
- b.append_name (operand_suffixes[instance.op_info->op]);
-
/* vop<sew>_<op> --> vop<sew>_<op>_<type>. */
if (!overloaded_p)
- b.append_name (type_suffixes[instance.type.index].vector);
+ {
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ b.append_name (type_suffixes[instance.type.index].vector);
+ }
/* According to rvv-intrinsic-doc, it does not add "_m" suffix
for vop_m C++ overloaded API. */
@@ -333,9 +323,9 @@ struct move_def : public build_base
char *get_name (function_builder &b, const function_instance &instance,
bool overloaded_p) const override
{
- /* vmv.v.x (PRED_none) can not be overloaded. */
- if (instance.op_info->op == OP_TYPE_x && overloaded_p
- && instance.pred == PRED_TYPE_none)
+ /* vmv.v.x/vfmv.v.f (PRED_none) can not be overloaded. */
+ if ((instance.op_info->op == OP_TYPE_x || instance.op_info->op == OP_TYPE_f)
+ && overloaded_p && instance.pred == PRED_TYPE_none)
return nullptr;
b.append_base_name (instance.base_name);
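The net effect of the alu_def simplification above is that the operand suffix and the type suffix are now appended together, and only for non-overloaded names; overloaded names keep just the base name (predication suffixes are handled separately and omitted here). A standalone sketch of that rule, with illustrative function and suffix strings rather than the builder's real API:

```cpp
#include <string>

// Illustrative model of the suffixing rule after this patch: a
// non-overloaded intrinsic name is base + operand suffix + type suffix
// (e.g. "vfadd" + "_vv" + "_f32m1"); an overloaded name is just the base.
static std::string
build_name (const std::string &base, const std::string &op_suffix,
            const std::string &type_suffix, bool overloaded_p)
{
  std::string name = base;
  if (!overloaded_p)
    name += op_suffix + type_suffix;
  return name;
}
```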
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 0a562bd283f..bb3811d2d90 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -92,6 +92,47 @@ along with GCC; see the file COPYING3. If not see
#define DEF_RVV_FULL_V_U_OPS(TYPE, REQUIRE)
#endif
+/* Use the "DEF_RVV_WEXTF_OPS" macro to include the double-widening float
+   types that will be iterated over and registered as intrinsic functions.  */
+#ifndef DEF_RVV_WEXTF_OPS
+#define DEF_RVV_WEXTF_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_CONVERT_I_OPS" macro to include all integer types that are
+   converted to/from floats with the same number of elements; these will be
+   iterated over and registered as intrinsic functions.  */
+#ifndef DEF_RVV_CONVERT_I_OPS
+#define DEF_RVV_CONVERT_I_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_CONVERT_U_OPS" macro to include all unsigned integer types
+   that are converted to/from floats with the same number of elements; these
+   will be iterated over and registered as intrinsic functions.  */
+#ifndef DEF_RVV_CONVERT_U_OPS
+#define DEF_RVV_CONVERT_U_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_WCONVERT_I_OPS" macro to include all integer types used in
+   widening conversions to/from floats; these will be iterated over and
+   registered as intrinsic functions.  */
+#ifndef DEF_RVV_WCONVERT_I_OPS
+#define DEF_RVV_WCONVERT_I_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_WCONVERT_U_OPS" macro to include all unsigned integer types
+   used in widening conversions to/from floats; these will be iterated over and
+   registered as intrinsic functions.  */
+#ifndef DEF_RVV_WCONVERT_U_OPS
+#define DEF_RVV_WCONVERT_U_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_WCONVERT_F_OPS" macro to include all floating-point types
+   used in widening conversions; these will be iterated over and registered as
+   intrinsic functions.  */
+#ifndef DEF_RVV_WCONVERT_F_OPS
+#define DEF_RVV_WCONVERT_F_OPS(TYPE, REQUIRE)
+#endif
+
DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64)
DEF_RVV_I_OPS (vint8mf4_t, 0)
DEF_RVV_I_OPS (vint8mf2_t, 0)
@@ -264,6 +305,46 @@ DEF_RVV_FULL_V_U_OPS (vuint64m2_t, RVV_REQUIRE_FULL_V)
DEF_RVV_FULL_V_U_OPS (vuint64m4_t, RVV_REQUIRE_FULL_V)
DEF_RVV_FULL_V_U_OPS (vuint64m8_t, RVV_REQUIRE_FULL_V)
+DEF_RVV_WEXTF_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WEXTF_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WEXTF_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WEXTF_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
+
+DEF_RVV_CONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_I_OPS (vint32m1_t, 0)
+DEF_RVV_CONVERT_I_OPS (vint32m2_t, 0)
+DEF_RVV_CONVERT_I_OPS (vint32m4_t, 0)
+DEF_RVV_CONVERT_I_OPS (vint32m8_t, 0)
+DEF_RVV_CONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_CONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_U_OPS (vuint32m1_t, 0)
+DEF_RVV_CONVERT_U_OPS (vuint32m2_t, 0)
+DEF_RVV_CONVERT_U_OPS (vuint32m4_t, 0)
+DEF_RVV_CONVERT_U_OPS (vuint32m8_t, 0)
+DEF_RVV_CONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64)
+DEF_RVV_CONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64)
+
+DEF_RVV_WCONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+
+DEF_RVV_WCONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+DEF_RVV_WCONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64)
+
+DEF_RVV_WCONVERT_F_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WCONVERT_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WCONVERT_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
+DEF_RVV_WCONVERT_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
+
#undef DEF_RVV_I_OPS
#undef DEF_RVV_U_OPS
#undef DEF_RVV_F_OPS
@@ -276,3 +357,9 @@ DEF_RVV_FULL_V_U_OPS (vuint64m8_t, RVV_REQUIRE_FULL_V)
#undef DEF_RVV_OEXTU_OPS
#undef DEF_RVV_FULL_V_I_OPS
#undef DEF_RVV_FULL_V_U_OPS
+#undef DEF_RVV_WEXTF_OPS
+#undef DEF_RVV_CONVERT_I_OPS
+#undef DEF_RVV_CONVERT_U_OPS
+#undef DEF_RVV_WCONVERT_I_OPS
+#undef DEF_RVV_WCONVERT_U_OPS
+#undef DEF_RVV_WCONVERT_F_OPS
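The CONVERT/WCONVERT type lists above feed the float/integer conversion intrinsics from sections 13.17-13.19, where the rtz variants differ from the plain ones only in rounding. A hedged scalar sketch of that distinction (it ignores RVV's NaN handling and overflow saturation, and assumes the default round-to-nearest-even mode):

```cpp
#include <cmath>

// vfcvt.x rounds using the dynamic rounding mode (round-to-nearest-even
// by default), while vfcvt.rtz.x always truncates toward zero.
static int vfcvt_x_e    (float v) { return (int) std::rint (v);  }
static int vfcvt_rtz_x_e(float v) { return (int) std::trunc (v); }
```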
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 3747cad672f..7858a6d0e86 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -147,12 +147,42 @@ static const rvv_type_info full_v_u_ops[] = {
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
-/* A list of all signed integer will be registered for intrinsic functions. */
+/* A list of all unsigned integer types that will be registered for intrinsic functions. */
static const rvv_type_info u_ops[] = {
#define DEF_RVV_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
+/* A list of signed integer conversion types that will be registered for intrinsic functions. */
+static const rvv_type_info convert_i_ops[] = {
+#define DEF_RVV_CONVERT_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of unsigned integer conversion types that will be registered for intrinsic functions. */
+static const rvv_type_info convert_u_ops[] = {
+#define DEF_RVV_CONVERT_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of widened signed integer conversion types that will be registered for intrinsic functions. */
+static const rvv_type_info wconvert_i_ops[] = {
+#define DEF_RVV_WCONVERT_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of widened unsigned integer conversion types that will be registered for intrinsic functions. */
+static const rvv_type_info wconvert_u_ops[] = {
+#define DEF_RVV_WCONVERT_U_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of widened floating-point conversion types that will be registered for intrinsic functions. */
+static const rvv_type_info wconvert_f_ops[] = {
+#define DEF_RVV_WCONVERT_F_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
/* A list of all integer will be registered for intrinsic functions. */
static const rvv_type_info iu_ops[] = {
#define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -174,6 +204,12 @@ static const rvv_type_info b_ops[] = {
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
+/* A list of all floating-point types that will be registered for intrinsic functions. */
+static const rvv_type_info f_ops[] = {
+#define DEF_RVV_F_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
/* A list of Double-Widening signed integer will be registered for intrinsic
* functions. */
static const rvv_type_info wexti_ops[] = {
@@ -181,6 +217,13 @@ static const rvv_type_info wexti_ops[] = {
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
+/* A list of double-widening float types that will be registered for intrinsic
+   functions.  */
+static const rvv_type_info wextf_ops[] = {
+#define DEF_RVV_WEXTF_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
/* A list of Quad-Widening signed integer will be registered for intrinsic
* functions. */
static const rvv_type_info qexti_ops[] = {
@@ -375,6 +418,19 @@ static CONSTEXPR const rvv_arg_type_info shift_wv_args[]
static CONSTEXPR const rvv_arg_type_info v_args[]
= {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info f_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_float_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info trunc_f_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_double_trunc_float_vector),
+ rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info w_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_double_trunc_vector), rvv_arg_type_info_end};
+
/* A list of args for vector_type func (vector_type) function. */
static CONSTEXPR const rvv_arg_type_info m_args[]
= {rvv_arg_type_info (RVV_BASE_mask), rvv_arg_type_info_end};
@@ -479,6 +535,24 @@ static CONSTEXPR const rvv_arg_type_info vf8_args[]
static CONSTEXPR const rvv_arg_type_info x_x_v_args[]
= {rvv_arg_type_info (RVV_BASE_double_trunc_vector), rvv_arg_type_info_end};
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info x_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_signed_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info xu_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info w_x_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_double_trunc_signed_vector),
+ rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (vector_type) function. */
+static CONSTEXPR const rvv_arg_type_info w_xu_v_args[]
+ = {rvv_arg_type_info (RVV_BASE_double_trunc_unsigned_vector),
+ rvv_arg_type_info_end};
+
/* A list of none preds that will be registered for intrinsic functions. */
static CONSTEXPR const predication_type_index none_preds[]
= {PRED_TYPE_none, NUM_PRED_TYPES};
@@ -707,6 +781,22 @@ static CONSTEXPR const rvv_op_info iu_vvxv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
vxv_args /* Args */};
+/* A static operand information for vector_type func (vector_type, vector_type,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info f_vvvv_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vvv_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, scalar_type,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info f_vvfv_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vxv_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, vector_type,
* mask_type) function registration. */
static CONSTEXPR const rvv_op_info iu_vvvm_ops
@@ -731,6 +821,14 @@ static CONSTEXPR const rvv_op_info iu_vvxm_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
vxm_args /* Args */};
+/* A static operand information for vector_type func (vector_type, scalar_type,
+ * mask_type) function registration. */
+static CONSTEXPR const rvv_op_info f_vvfm_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vfm, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vxm_args /* Args */};
+
/* A static operand information for mask_type func (vector_type, vector_type,
* mask_type) function registration. */
static CONSTEXPR const rvv_op_info iu_mvvm_ops
@@ -771,6 +869,14 @@ static CONSTEXPR const rvv_op_info u_mvv_ops
rvv_arg_type_info (RVV_BASE_mask), /* Return type */
vv_args /* Args */};
+/* A static operand information for mask_type func (vector_type, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_mvv_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_mask), /* Return type */
+ vv_args /* Args */};
+
/* A static operand information for mask_type func (vector_type, scalar_type)
* function registration. */
static CONSTEXPR const rvv_op_info iu_mvx_ops
@@ -795,6 +901,14 @@ static CONSTEXPR const rvv_op_info u_mvx_ops
rvv_arg_type_info (RVV_BASE_mask), /* Return type */
vx_args /* Args */};
+/* A static operand information for mask_type func (vector_type, scalar_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_mvf_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_mask), /* Return type */
+ vx_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, vector_type)
* function registration. */
static CONSTEXPR const rvv_op_info i_vvv_ops
@@ -811,6 +925,22 @@ static CONSTEXPR const rvv_op_info u_vvv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
vv_args /* Args */};
+/* A static operand information for vector_type func (vector_type, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_vvv_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vv_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_vvf_ops
+ = {f_ops, /* Types */
+ OP_TYPE_vf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ vx_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, vector_type)
* function registration. */
static CONSTEXPR const rvv_op_info full_v_i_vvv_ops
@@ -940,6 +1070,135 @@ static CONSTEXPR const rvv_op_info iu_v_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
v_args /* Args */};
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_u_v_ops
+ = {convert_u_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ f_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_i_f_v_ops
+ = {convert_i_ops, /* Types */
+ OP_TYPE_f_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ f_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_wi_f_v_ops
+ = {wconvert_i_ops, /* Types */
+ OP_TYPE_f_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ trunc_f_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_ni_f_w_ops
+ = {f_ops, /* Types */
+ OP_TYPE_f_w, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_double_trunc_signed_vector), /* Return type */
+ v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_nu_f_w_ops
+ = {f_ops, /* Types */
+ OP_TYPE_f_w, /* Suffix */
+ rvv_arg_type_info (
+ RVV_BASE_double_trunc_unsigned_vector), /* Return type */
+ v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i_to_f_x_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_x_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ x_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_to_f_xu_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_xu_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ xu_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i_to_wf_x_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_x_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ w_x_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_to_wf_xu_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_xu_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ w_xu_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i_to_nf_x_w_ops
+ = {wconvert_i_ops, /* Types */
+ OP_TYPE_x_w, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_double_trunc_float_vector), /* Return type */
+ v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u_to_nf_xu_w_ops
+ = {wconvert_u_ops, /* Types */
+ OP_TYPE_xu_w, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_double_trunc_float_vector), /* Return type */
+ v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_u_f_v_ops
+ = {convert_u_ops, /* Types */
+ OP_TYPE_f_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ f_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_wu_f_v_ops
+ = {wconvert_u_ops, /* Types */
+ OP_TYPE_f_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ trunc_f_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_wf_f_v_ops
+ = {f_ops, /* Types */
+ OP_TYPE_f_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ w_v_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_to_nf_f_w_ops
+ = {wconvert_f_ops, /* Types */
+ OP_TYPE_f_w, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_double_trunc_float_vector), /* Return type */
+ v_args /* Args */};
+
/* A static operand information for vector_type func (vector_type)
* function registration. */
static CONSTEXPR const rvv_op_info all_v_ops
@@ -956,6 +1215,14 @@ static CONSTEXPR const rvv_op_info iu_x_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
x_args /* Args */};
+/* A static operand information for vector_type func (scalar_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info f_f_ops
+ = {f_ops, /* Types */
+ OP_TYPE_f, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ x_args /* Args */};
+
/* A static operand information for vector_type func (double demote type)
* function registration. */
static CONSTEXPR const rvv_op_info i_vf2_ops
@@ -1012,6 +1279,14 @@ static CONSTEXPR const rvv_op_info i_wvv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wvv_args /* Args */};
+/* A static operand information for vector_type func (double demote type, double
+ * demote type) function registration. */
+static CONSTEXPR const rvv_op_info f_wvv_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wvv_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, double demote
* type, double demote type) function registration. */
static CONSTEXPR const rvv_op_info i_wwvv_ops
@@ -1028,6 +1303,22 @@ static CONSTEXPR const rvv_op_info i_wwxv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wwxv_args /* Args */};
+/* A static operand information for vector_type func (vector_type, double demote
+ * type, double demote type) function registration. */
+static CONSTEXPR const rvv_op_info f_wwvv_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_vv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wwvv_args /* Args */};
+
+/* A static operand information for vector_type func (vector_type, double demote
+ * scalar_type, double demote type) function registration. */
+static CONSTEXPR const rvv_op_info f_wwfv_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_vf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wwxv_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, double demote
* type, double demote type) function registration. */
static CONSTEXPR const rvv_op_info u_wwvv_ops
@@ -1092,6 +1383,14 @@ static CONSTEXPR const rvv_op_info i_wvx_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wvx_args /* Args */};
+/* A static operand information for vector_type func (double demote type, double
+ * demote scalar_type) function registration. */
+static CONSTEXPR const rvv_op_info f_wvf_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_vf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wvx_args /* Args */};
+
/* A static operand information for vector_type func (signed double demote type,
* unsigned double demote scalar_type) function registration. */
static CONSTEXPR const rvv_op_info i_su_wvx_ops
@@ -1108,6 +1407,14 @@ static CONSTEXPR const rvv_op_info i_wwv_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wwv_args /* Args */};
+/* A static operand information for vector_type func (vector_type, double
+ * demote type) function registration. */
+static CONSTEXPR const rvv_op_info f_wwv_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_wv, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wwv_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, double
* demote scalar_type) function registration. */
static CONSTEXPR const rvv_op_info i_wwx_ops
@@ -1116,6 +1423,14 @@ static CONSTEXPR const rvv_op_info i_wwx_ops
rvv_arg_type_info (RVV_BASE_vector), /* Return type */
wwx_args /* Args */};
+/* A static operand information for vector_type func (vector_type, double
+ * demote scalar_type) function registration. */
+static CONSTEXPR const rvv_op_info f_wwf_ops
+ = {wextf_ops, /* Types */
+ OP_TYPE_wf, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ wwx_args /* Args */};
+
/* A static operand information for vector_type func (vector_type, double
* demote type) function registration. */
static CONSTEXPR const rvv_op_info u_wwv_ops
@@ -1388,9 +1703,11 @@ register_vector_type (vector_type_index type)
static bool
required_extensions_p (enum rvv_base_type type)
{
- return type == RVV_BASE_vector || type == RVV_BASE_uint8_index
- || type == RVV_BASE_uint16_index || type == RVV_BASE_uint32_index
- || type == RVV_BASE_uint64_index;
+ return type == RVV_BASE_uint8_index || type == RVV_BASE_uint16_index
+ || type == RVV_BASE_uint32_index || type == RVV_BASE_uint64_index
+ || type == RVV_BASE_float_vector
+ || type == RVV_BASE_double_trunc_float_vector
+ || type == RVV_BASE_double_trunc_vector;
}
/* Check whether all the RVV_REQUIRE_* values in REQUIRED_EXTENSIONS are
@@ -1410,7 +1727,7 @@ check_required_extensions (const function_instance &instance)
enum vector_type_index vector_type
= op_info->args[i].get_base_vector_type (type);
if (vector_type == NUM_VECTOR_TYPES)
- continue;
+ return false;
required_extensions |= op_info->types[vector_type].required_extensions;
/* According to RVV ISA, EEW=64 index of indexed loads/stores require
@@ -1474,17 +1791,42 @@ get_mask_policy_for_pred (enum predication_type_index pred)
return gen_int_mode (get_prefer_mask_policy (), Pmode);
}
+static bool
+unsigned_base_type_p (rvv_base_type base_type)
+{
+ return base_type == RVV_BASE_double_trunc_unsigned_vector
+ || base_type == RVV_BASE_double_trunc_unsigned_scalar
+ || base_type == RVV_BASE_unsigned_vector
+ || base_type == RVV_BASE_uint8_index
+ || base_type == RVV_BASE_uint16_index
+ || base_type == RVV_BASE_uint32_index
+ || base_type == RVV_BASE_uint64_index
+ || base_type == RVV_BASE_shift_vector;
+}
+
+static machine_mode
+get_mode_for_bitsize (poly_int64 bitsize, bool float_mode_p)
+{
+ if (float_mode_p)
+ return float_mode_for_size (bitsize).require ();
+ else
+ return int_mode_for_size (bitsize, 0).require ();
+}
+
vector_type_index
rvv_arg_type_info::get_base_vector_type (tree type) const
{
if (!type)
return NUM_VECTOR_TYPES;
+
poly_int64 nunits = GET_MODE_NUNITS (TYPE_MODE (type));
machine_mode inner_mode = GET_MODE_INNER (TYPE_MODE (type));
+ poly_int64 bitsize = GET_MODE_BITSIZE (inner_mode);
+
bool unsigned_p = TYPE_UNSIGNED (type);
- if (base_type == RVV_BASE_double_trunc_unsigned_vector
- || base_type == RVV_BASE_double_trunc_unsigned_scalar)
+ if (unsigned_base_type_p (base_type))
unsigned_p = true;
+
switch (base_type)
{
case RVV_BASE_mask:
@@ -1492,50 +1834,46 @@ rvv_arg_type_info::get_base_vector_type (tree type) const
break;
case RVV_BASE_uint8_index:
inner_mode = E_QImode;
- unsigned_p = true;
break;
case RVV_BASE_uint16_index:
inner_mode = E_HImode;
- unsigned_p = true;
break;
case RVV_BASE_uint32_index:
inner_mode = E_SImode;
- unsigned_p = true;
break;
case RVV_BASE_uint64_index:
inner_mode = E_DImode;
- unsigned_p = true;
break;
case RVV_BASE_shift_vector:
inner_mode = GET_MODE_INNER (TYPE_MODE (type));
- unsigned_p = true;
break;
case RVV_BASE_double_trunc_vector:
case RVV_BASE_double_trunc_scalar:
+ inner_mode = get_mode_for_bitsize (exact_div (bitsize, 2),
+ FLOAT_MODE_P (inner_mode));
+ break;
case RVV_BASE_double_trunc_unsigned_vector:
case RVV_BASE_double_trunc_unsigned_scalar:
- if (inner_mode == DImode)
- inner_mode = SImode;
- else if (inner_mode == SImode)
- inner_mode = HImode;
- else if (inner_mode == HImode)
- inner_mode = QImode;
- else
- gcc_unreachable ();
+ case RVV_BASE_double_trunc_signed_vector:
+ inner_mode = int_mode_for_size (exact_div (bitsize, 2), 0).require ();
break;
case RVV_BASE_quad_trunc_vector:
- if (inner_mode == DImode)
- inner_mode = HImode;
- else if (inner_mode == SImode)
- inner_mode = QImode;
- else
- gcc_unreachable ();
+ inner_mode = get_mode_for_bitsize (exact_div (bitsize, 4),
+ FLOAT_MODE_P (inner_mode));
break;
case RVV_BASE_oct_trunc_vector:
- if (inner_mode == DImode)
- inner_mode = QImode;
- else
- gcc_unreachable ();
+ inner_mode = get_mode_for_bitsize (exact_div (bitsize, 8),
+ FLOAT_MODE_P (inner_mode));
+ break;
+ case RVV_BASE_float_vector:
+ inner_mode = float_mode_for_size (bitsize).require ();
+ break;
+ case RVV_BASE_double_trunc_float_vector:
+ inner_mode = float_mode_for_size (exact_div (bitsize, 2)).require ();
+ break;
+ case RVV_BASE_signed_vector:
+ case RVV_BASE_unsigned_vector:
+ inner_mode = int_mode_for_mode (inner_mode).require ();
break;
default:
return NUM_VECTOR_TYPES;
@@ -1552,7 +1890,7 @@ rvv_arg_type_info::get_base_vector_type (tree type) const
if (!vector_type)
continue;
- if (GET_MODE_CLASS (TYPE_MODE (vector_type)) != MODE_VECTOR_BOOL
+ if (GET_MODE_CLASS (TYPE_MODE (vector_type)) == MODE_VECTOR_INT
&& TYPE_UNSIGNED (vector_type) != unsigned_p)
continue;
@@ -1581,9 +1919,6 @@ rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
type is always the signed type + 1 (they have the same SEW and LMUL).
For example 'vuint8mf8_t' enum = 'vint8mf8_t' enum + 1.
Note: We don't allow type_idx to be an unsigned type. */
- case RVV_BASE_unsigned_vector:
- gcc_assert (!TYPE_UNSIGNED (builtin_types[type_idx].vector));
- return builtin_types[type_idx + 1].vector;
case RVV_BASE_unsigned_scalar:
gcc_assert (!TYPE_UNSIGNED (builtin_types[type_idx].scalar));
return builtin_types[type_idx + 1].scalar;
@@ -1621,8 +1956,13 @@ rvv_arg_type_info::get_tree_type (vector_type_index type_idx) const
case RVV_BASE_double_trunc_vector:
case RVV_BASE_quad_trunc_vector:
case RVV_BASE_oct_trunc_vector:
+ case RVV_BASE_double_trunc_signed_vector:
case RVV_BASE_double_trunc_unsigned_vector:
case RVV_BASE_mask:
+ case RVV_BASE_float_vector:
+ case RVV_BASE_double_trunc_float_vector:
+ case RVV_BASE_signed_vector:
+ case RVV_BASE_unsigned_vector:
if (get_base_vector_type (builtin_types[type_idx].vector)
!= NUM_VECTOR_TYPES)
return builtin_types[get_base_vector_type (
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index baafed8a4e9..bb672f3b449 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -288,7 +288,11 @@ DEF_RVV_OP_TYPE (vf)
DEF_RVV_OP_TYPE (vm)
DEF_RVV_OP_TYPE (wf)
DEF_RVV_OP_TYPE (vfm)
-DEF_RVV_OP_TYPE (v_f)
+DEF_RVV_OP_TYPE (f)
+DEF_RVV_OP_TYPE (f_v)
+DEF_RVV_OP_TYPE (xu_v)
+DEF_RVV_OP_TYPE (f_w)
+DEF_RVV_OP_TYPE (xu_w)
DEF_RVV_PRED_TYPE (ta)
DEF_RVV_PRED_TYPE (tu)
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index b9d73670789..db6ab389e64 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -141,6 +141,7 @@ enum rvv_base_type
RVV_BASE_vector,
RVV_BASE_scalar,
RVV_BASE_mask,
+ RVV_BASE_signed_vector,
RVV_BASE_unsigned_vector,
RVV_BASE_unsigned_scalar,
RVV_BASE_vector_ptr,
@@ -160,8 +161,11 @@ enum rvv_base_type
RVV_BASE_quad_trunc_vector,
RVV_BASE_oct_trunc_vector,
RVV_BASE_double_trunc_scalar,
+ RVV_BASE_double_trunc_signed_vector,
RVV_BASE_double_trunc_unsigned_vector,
RVV_BASE_double_trunc_unsigned_scalar,
+ RVV_BASE_float_vector,
+ RVV_BASE_double_trunc_float_vector,
NUM_BASE_TYPES
};
@@ -343,6 +347,7 @@ public:
machine_mode vector_mode (void) const;
machine_mode index_mode (void) const;
+ machine_mode arg_mode (int) const;
rtx use_exact_insn (insn_code);
rtx use_contiguous_load_insn (insn_code);
@@ -492,6 +497,13 @@ function_expander::index_mode (void) const
return TYPE_MODE (op_info->args[1].get_tree_type (type.index));
}
+/* Return the machine_mode of the corresponding arg type. */
+inline machine_mode
+function_expander::arg_mode (int idx) const
+{
+ return TYPE_MODE (op_info->args[idx].get_tree_type (type.index));
+}
+
/* Default implementation of function_base::call_properties, with conservatively
correct behavior for floating-point instructions. */
inline unsigned int
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 023b0b329c4..127e1b07fcf 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -54,6 +54,18 @@
UNSPEC_VMSIF
UNSPEC_VMSOF
UNSPEC_VIOTA
+
+ UNSPEC_VFRSQRT7
+ UNSPEC_VFREC7
+ UNSPEC_VFCLASS
+
+ UNSPEC_VCOPYSIGN
+ UNSPEC_VNCOPYSIGN
+ UNSPEC_VXORSIGN
+
+ UNSPEC_VFCVT
+ UNSPEC_UNSIGNED_VFCVT
+ UNSPEC_ROD
])
(define_mode_iterator V [
@@ -81,6 +93,18 @@
(VNx4DI "TARGET_MIN_VLEN > 32") (VNx8DI "TARGET_MIN_VLEN > 32")
])
+(define_mode_iterator VF [
+ (VNx1SF "TARGET_VECTOR_ELEN_FP_32")
+ (VNx2SF "TARGET_VECTOR_ELEN_FP_32")
+ (VNx4SF "TARGET_VECTOR_ELEN_FP_32")
+ (VNx8SF "TARGET_VECTOR_ELEN_FP_32")
+ (VNx16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (VNx1DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
+])
+
(define_mode_iterator VFULLI [
VNx1QI VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI (VNx64QI "TARGET_MIN_VLEN > 32")
VNx1HI VNx2HI VNx4HI VNx8HI VNx16HI (VNx32HI "TARGET_MIN_VLEN > 32")
@@ -210,6 +234,20 @@
(VNx4DI "TARGET_MIN_VLEN > 32") (VNx8DI "TARGET_MIN_VLEN > 32")
])
+(define_mode_iterator VWEXTF [
+ (VNx1DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx2DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx4DF "TARGET_VECTOR_ELEN_FP_64")
+ (VNx8DF "TARGET_VECTOR_ELEN_FP_64")
+])
+
+(define_mode_iterator VWCONVERTI [
+ (VNx1DI "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (VNx2DI "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (VNx4DI "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+ (VNx8DI "TARGET_MIN_VLEN > 32 && TARGET_VECTOR_ELEN_FP_32")
+])
+
(define_mode_iterator VQEXTI [
VNx1SI VNx2SI VNx4SI VNx8SI (VNx16SI "TARGET_MIN_VLEN > 32")
(VNx1DI "TARGET_MIN_VLEN > 32") (VNx2DI "TARGET_MIN_VLEN > 32")
@@ -266,15 +304,16 @@
])
(define_mode_attr V_DOUBLE_TRUNC [
- (VNx1HI "VNx1QI") (VNx2HI "VNx2QI") (VNx4HI "VNx4QI") (VNx8HI "VNx8QI")
+ (VNx1HI "VNx1QI") (VNx2HI "VNx2QI") (VNx4HI "VNx4QI") (VNx8HI "VNx8QI")
(VNx16HI "VNx16QI") (VNx32HI "VNx32QI")
- (VNx1SI "VNx1HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
+ (VNx1SI "VNx1HI") (VNx2SI "VNx2HI") (VNx4SI "VNx4HI") (VNx8SI "VNx8HI")
(VNx16SI "VNx16HI")
(VNx1DI "VNx1SI") (VNx2DI "VNx2SI") (VNx4DI "VNx4SI") (VNx8DI "VNx8SI")
+ (VNx1DF "VNx1SF") (VNx2DF "VNx2SF") (VNx4DF "VNx4SF") (VNx8DF "VNx8SF")
])
(define_mode_attr V_QUAD_TRUNC [
- (VNx1SI "VNx1QI") (VNx2SI "VNx2QI") (VNx4SI "VNx4QI") (VNx8SI "VNx8QI")
+ (VNx1SI "VNx1QI") (VNx2SI "VNx2QI") (VNx4SI "VNx4QI") (VNx8SI "VNx8QI")
(VNx16SI "VNx16QI")
(VNx1DI "VNx1HI") (VNx2DI "VNx2HI")
(VNx4DI "VNx4HI") (VNx8DI "VNx8HI")
@@ -284,6 +323,17 @@
(VNx1DI "VNx1QI") (VNx2DI "VNx2QI") (VNx4DI "VNx4QI") (VNx8DI "VNx8QI")
])
+(define_mode_attr VCONVERT [
+ (VNx1SF "VNx1SI") (VNx2SF "VNx2SI") (VNx4SF "VNx4SI") (VNx8SF "VNx8SI") (VNx16SF "VNx16SI")
+ (VNx1DF "VNx1DI") (VNx2DF "VNx2DI") (VNx4DF "VNx4DI") (VNx8DF "VNx8DI")
+])
+
+(define_mode_attr VNCONVERT [
+ (VNx1SF "VNx1HI") (VNx2SF "VNx2HI") (VNx4SF "VNx4HI") (VNx8SF "VNx8HI") (VNx16SF "VNx16HI")
+ (VNx1DI "VNx1SF") (VNx2DI "VNx2SF") (VNx4DI "VNx4SF") (VNx8DI "VNx8SF")
+ (VNx1DF "VNx1SI") (VNx2DF "VNx2SI") (VNx4DF "VNx4SI") (VNx8DF "VNx8SI")
+])
+
(define_int_iterator ORDER [UNSPEC_ORDERED UNSPEC_UNORDERED])
(define_int_iterator VMULH [UNSPEC_VMULHS UNSPEC_VMULHU UNSPEC_VMULHSU])
@@ -300,12 +350,17 @@
(define_int_iterator VMISC [UNSPEC_VMSBF UNSPEC_VMSIF UNSPEC_VMSOF])
+(define_int_iterator VFMISC [UNSPEC_VFRSQRT7 UNSPEC_VFREC7])
+
+(define_int_iterator VFCVTS [UNSPEC_VFCVT UNSPEC_UNSIGNED_VFCVT])
+
(define_int_attr order [
(UNSPEC_ORDERED "o") (UNSPEC_UNORDERED "u")
])
(define_int_attr v_su [(UNSPEC_VMULHS "") (UNSPEC_VMULHU "u") (UNSPEC_VMULHSU "su")
- (UNSPEC_VNCLIP "") (UNSPEC_VNCLIPU "u")])
+ (UNSPEC_VNCLIP "") (UNSPEC_VNCLIPU "u")
+ (UNSPEC_VFCVT "") (UNSPEC_UNSIGNED_VFCVT "u")])
(define_int_attr sat_op [(UNSPEC_VAADDU "aaddu") (UNSPEC_VAADD "aadd")
(UNSPEC_VASUBU "asubu") (UNSPEC_VASUB "asub")
(UNSPEC_VSMUL "smul") (UNSPEC_VSSRL "ssrl")
@@ -316,7 +371,19 @@
(UNSPEC_VSSRA "vsshift") (UNSPEC_VNCLIP "vnclip")
(UNSPEC_VNCLIPU "vnclip")])
-(define_int_attr misc_op [(UNSPEC_VMSBF "sbf") (UNSPEC_VMSIF "sif") (UNSPEC_VMSOF "sof")])
+(define_int_attr misc_op [(UNSPEC_VMSBF "sbf") (UNSPEC_VMSIF "sif") (UNSPEC_VMSOF "sof")
+ (UNSPEC_VFRSQRT7 "rsqrt7") (UNSPEC_VFREC7 "rec7")])
+
+(define_int_attr float_insn_type [(UNSPEC_VFRSQRT7 "vfsqrt") (UNSPEC_VFREC7 "vfrecp")])
+
+(define_int_iterator VCOPYSIGNS [UNSPEC_VCOPYSIGN UNSPEC_VNCOPYSIGN UNSPEC_VXORSIGN])
+
+(define_int_attr copysign [(UNSPEC_VCOPYSIGN "copysign")
+ (UNSPEC_VNCOPYSIGN "ncopysign")
+ (UNSPEC_VXORSIGN "xorsign")])
+
+(define_int_attr nx [(UNSPEC_VCOPYSIGN "") (UNSPEC_VNCOPYSIGN "n")
+ (UNSPEC_VXORSIGN "x")])
(define_code_iterator any_int_binop [plus minus and ior xor ashift ashiftrt lshiftrt
smax umax smin umin mult div udiv mod umod
@@ -339,8 +406,21 @@
(define_code_attr macc_nmsac [(plus "macc") (minus "nmsac")])
(define_code_attr madd_nmsub [(plus "madd") (minus "nmsub")])
+(define_code_attr nmacc_msac [(plus "nmacc") (minus "msac")])
+(define_code_attr nmadd_msub [(plus "nmadd") (minus "msub")])
(define_code_iterator and_ior [and ior])
+
+(define_code_iterator any_float_binop [plus mult smax smin minus div])
+(define_code_iterator commutative_float_binop [plus mult smax smin])
+(define_code_iterator non_commutative_float_binop [minus div])
+(define_code_iterator any_float_unop [neg abs sqrt])
+
+(define_code_iterator any_fix [fix unsigned_fix])
+(define_code_iterator any_float [float unsigned_float])
+(define_code_attr fix_cvt [(fix "fix_trunc") (unsigned_fix "fixuns_trunc")])
+(define_code_attr float_cvt [(float "float") (unsigned_float "floatuns")])
+
(define_code_attr ninsn [(and "nand") (ior "nor") (xor "xnor")])
(define_code_attr binop_rhs1_predicate [
@@ -459,6 +539,17 @@
(minus "walu")
(mult "wmul")])
+(define_code_attr float_insn_type [
+ (plus "vfalu")
+ (mult "vfmul")
+ (smax "vfminmax")
+ (smin "vfminmax")
+ (minus "vfalu")
+ (div "vfdiv")
+ (neg "vfsgnj")
+ (abs "vfsgnj")
+ (sqrt "vfsqrt")])
+
;; <binop_vi_variant_insn> expands to the insn name of binop matching constraint rhs1 is immediate.
;; minus is negated as vadd and ss_minus is negated as vsadd, others remain <insn>.
(define_code_attr binop_vi_variant_insn [(ashift "sll.vi")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index c131738c75f..51647386e0e 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -147,7 +147,11 @@
vext,viwalu,viwmul,vicalu,vnshift,\
vimuladd,vimerge,vaalu,vsmul,vsshift,\
vnclip,viminmax,viwmuladd,vmpop,vmffs,vmsfs,\
- vmiota,vmidx")
+ vmiota,vmidx,vfalu,vfmul,vfminmax,vfdiv,\
+ vfwalu,vfwmul,vfsqrt,vfrecp,vfsgnj,vfcmp,\
+ vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
+ vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
+ vfncvtftof,vfmuladd,vfwmuladd,vfclass")
(const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "VNx1QI,VNx1BI")
(symbol_ref "riscv_vector::get_ratio(E_VNx1QImode)")
@@ -200,20 +204,24 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vldm,vlds,vmalu,vldux,vldox,vicmp,\
vialu,vshift,viminmax,vimul,vidiv,vsalu,vext,viwalu,\
viwmul,vnshift,vaalu,vsmul,vsshift,vnclip,vmsfs,\
- vmiota,vmidx")
+ vmiota,vmidx,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
+ vfsqrt,vfrecp,vfsgnj,vfcmp,vfcvtitof,vfcvtftoi,vfwcvtitof,\
+ vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass")
(const_int 2)
- (eq_attr "type" "vimerge")
+ (eq_attr "type" "vimerge,vfmerge")
(const_int 1)
- (eq_attr "type" "vimuladd,viwmuladd")
+ (eq_attr "type" "vimuladd,viwmuladd,vfmuladd,vfwmuladd")
(const_int 5)]
(const_int INVALID_ATTRIBUTE)))
;; The index of operand[] to get the avl op.
(define_attr "vl_op_idx" ""
(cond [(eq_attr "type" "vlde,vste,vimov,vfmov,vldm,vstm,vmalu,vsts,vstux,\
- vstox,vext,vmsfs,vmiota")
+ vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,\
+ vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
+ vfncvtftoi,vfncvtftof,vfclass")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -225,10 +233,11 @@
(eq_attr "type" "vldux,vldox,vialu,vshift,viminmax,vimul,vidiv,vsalu,\
viwalu,viwmul,vnshift,vimerge,vaalu,vsmul,\
- vsshift,vnclip")
+ vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
+ vfsgnj,vfmerge")
(const_int 5)
- (eq_attr "type" "vicmp,vimuladd,viwmuladd")
+ (eq_attr "type" "vicmp,vimuladd,viwmuladd,vfcmp,vfmuladd,vfwmuladd")
(const_int 6)
(eq_attr "type" "vmpop,vmffs,vmidx")
@@ -237,7 +246,9 @@
;; The tail policy op value.
(define_attr "ta" ""
- (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota")
+ (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
+ vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
+ vfncvtitof,vfncvtftoi,vfncvtftof,vfclass")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -249,10 +260,11 @@
(eq_attr "type" "vldux,vldox,vialu,vshift,viminmax,vimul,vidiv,vsalu,\
viwalu,viwmul,vnshift,vimerge,vaalu,vsmul,\
- vsshift,vnclip")
+ vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,\
+ vfwalu,vfwmul,vfsgnj,vfmerge")
(symbol_ref "riscv_vector::get_ta(operands[6])")
- (eq_attr "type" "vimuladd,viwmuladd")
+ (eq_attr "type" "vimuladd,viwmuladd,vfmuladd,vfwmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
(eq_attr "type" "vmidx")
@@ -261,7 +273,9 @@
;; The mask policy op value.
(define_attr "ma" ""
- (cond [(eq_attr "type" "vlde,vext,vmiota")
+ (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
+ vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
+ vfncvtftof,vfclass")
(symbol_ref "riscv_vector::get_ma(operands[6])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -273,10 +287,11 @@
(eq_attr "type" "vldux,vldox,vialu,vshift,viminmax,vimul,vidiv,vsalu,\
viwalu,viwmul,vnshift,vaalu,vsmul,vsshift,\
- vnclip,vicmp")
+ vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
+ vfwalu,vfwmul,vfsgnj,vfcmp")
(symbol_ref "riscv_vector::get_ma(operands[7])")
- (eq_attr "type" "vimuladd,viwmuladd")
+ (eq_attr "type" "vimuladd,viwmuladd,vfmuladd,vfwmuladd")
(symbol_ref "riscv_vector::get_ma(operands[8])")
(eq_attr "type" "vmsfs,vmidx")
@@ -285,7 +300,9 @@
;; The avl type value.
(define_attr "avl_type" ""
- (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov,vext,vimerge")
+ (cond [(eq_attr "type" "vlde,vlde,vste,vimov,vimov,vimov,vfmov,vext,vimerge,\
+ vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
+ vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass")
(symbol_ref "INTVAL (operands[7])")
(eq_attr "type" "vldm,vstm,vimov,vmalu,vmalu")
(symbol_ref "INTVAL (operands[5])")
@@ -299,12 +316,13 @@
(eq_attr "type" "vldux,vldox,vialu,vshift,viminmax,vimul,vidiv,vsalu,\
viwalu,viwmul,vnshift,vimuladd,vaalu,vsmul,vsshift,\
- vnclip,vicmp")
+ vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
+ vfsgnj,vfcmp,vfmuladd")
(symbol_ref "INTVAL (operands[8])")
(eq_attr "type" "vstux,vstox")
(symbol_ref "INTVAL (operands[5])")
- (eq_attr "type" "vimuladd,viwmuladd")
+ (eq_attr "type" "vimuladd,viwmuladd,vfwmuladd")
(symbol_ref "INTVAL (operands[9])")
(eq_attr "type" "vmsfs,vmidx")
@@ -974,32 +992,31 @@
;; To use LICM optimization, we postpone generation of vlse.v to the split stage,
;; since a memory access instruction cannot be hoisted by LICM (loop-invariant code motion).
(define_insn_and_split "@pred_broadcast<mode>"
- [(set (match_operand:V 0 "register_operand" "=vr, vr, vr, vr")
- (if_then_else:V
+ [(set (match_operand:VI 0 "register_operand" "=vr, vr, vr")
+ (if_then_else:VI
(unspec:<VM>
- [(match_operand:<VM> 1 "vector_mask_operand" " Wc1, Wc1, vm, Wc1")
- (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
- (match_operand 5 "const_int_operand" " i, i, i, i")
- (match_operand 6 "const_int_operand" " i, i, i, i")
- (match_operand 7 "const_int_operand" " i, i, i, i")
+ [(match_operand:<VM> 1 "vector_mask_operand" " Wc1, vm, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (vec_duplicate:V
- (match_operand:<VEL> 3 "direct_broadcast_operand" " r, f, Wdm, Wdm"))
- (match_operand:V 2 "vector_merge_operand" "0vu, 0vu, 0vu, 0vu")))]
+ (vec_duplicate:VI
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " r, Wdm, Wdm"))
+ (match_operand:VI 2 "vector_merge_operand" "0vu, 0vu, 0vu")))]
"TARGET_VECTOR"
"@
vmv.v.x\t%0,%3
- vfmv.v.f\t%0,%3
vlse<sew>.v\t%0,%3,zero,%1.t
vlse<sew>.v\t%0,%3,zero"
- "!FLOAT_MODE_P (<MODE>mode) && register_operand (operands[3], <VEL>mode)
+ "register_operand (operands[3], <VEL>mode)
&& GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
[(set (match_dup 0)
- (if_then_else:V (unspec:<VM> [(match_dup 1) (match_dup 4)
+ (if_then_else:VI (unspec:<VM> [(match_dup 1) (match_dup 4)
(match_dup 5) (match_dup 6) (match_dup 7)
(reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (vec_duplicate:V (match_dup 3))
+ (vec_duplicate:VI (match_dup 3))
(match_dup 2)))]
{
gcc_assert (can_create_pseudo_p ());
@@ -1010,7 +1027,29 @@
m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
operands[3] = m;
}
- [(set_attr "type" "vimov,vfmov,vlds,vlds")
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_broadcast<mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vr, vr, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " Wc1, vm, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:VF
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " f, Wdm, Wdm"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu, 0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "@
+ vfmv.v.f\t%0,%3
+ vlse<sew>.v\t%0,%3,zero,%1.t
+ vlse<sew>.v\t%0,%3,zero"
+ [(set_attr "type" "vfmov")
(set_attr "mode" "<MODE>")])
;; -------------------------------------------------------------------------------
@@ -3242,7 +3281,7 @@
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
;; -------------------------------------------------------------------------------
-;; ---- Predicated comparison operations
+;; ---- Predicated integer comparison operations
;; -------------------------------------------------------------------------------
;; Includes:
;; - 11.8 Vector Integer Comparison Instructions
@@ -4352,7 +4391,7 @@
(set_attr "mode" "<MODE>")])
;; -------------------------------------------------------------------------------
-;; ---- Predicated integer ternary operations
+;; ---- Predicated widen integer ternary operations
;; -------------------------------------------------------------------------------
;; Includes:
;; - 11.14 Vector Widening Integer Multiply-Add Instructions
@@ -4667,3 +4706,1478 @@
"vid.v\t%0%p1"
[(set_attr "type" "vmidx")
(set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point binary operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.2 Vector Single-Width Floating-Point Add/Subtract Instructions
+;; - 13.4 Vector Single-Width Floating-Point Multiply/Divide Instructions
+;; - 13.11 Vector Floating-Point MIN/MAX Instructions
+;; - 13.12 Vector Floating-Point Sign-Injection Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_binop:VF
+ (match_operand:VF 3 "register_operand" " vr, vr")
+ (match_operand:VF 4 "register_operand" " vr, vr"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vf<insn>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (commutative_float_binop:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 4 "register_operand" " r, r"))
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vf<insn>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (non_commutative_float_binop:VF
+ (match_operand:VF 3 "register_operand" " vr, vr")
+ (vec_duplicate:VF
+ (match_operand:<VEL> 4 "register_operand" " f, f")))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vf<insn>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<optab><mode>_reverse_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (non_commutative_float_binop:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 4 "register_operand" " f, f"))
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfr<insn>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<copysign><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VF
+ [(match_operand:VF 3 "register_operand" " vr, vr")
+ (match_operand:VF 4 "register_operand" " vr, vr")] VCOPYSIGNS)
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfsgnj<nx>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vfsgnj")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<copysign><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VF
+ [(match_operand:VF 3 "register_operand" " vr, vr")
+ (vec_duplicate:VF
+ (match_operand:<VEL> 4 "register_operand" " f, f"))] VCOPYSIGNS)
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfsgnj<nx>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "vfsgnj")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point ternary operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.6 Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+;; -------------------------------------------------------------------------------
+
+(define_expand "@pred_mul_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (match_operand 9 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (match_operand:VF 2 "register_operand")
+ (match_operand:VF 3 "register_operand"))
+ (match_operand:VF 4 "register_operand"))
+ (match_operand:VF 5 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+{
+ /* If the merge value is the second multiplication operand, swap the
+ (commutative) multiplication operands so the merge value becomes
+ operand 2 and the tied-operand alternatives can match.  */
+ if (rtx_equal_p (operands[3], operands[5]))
+ std::swap (operands[2], operands[3]);
+})
+
+(define_insn "pred_mul_<optab><mode>_undef_merge"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " %0, 0, vr, vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr, vr, vr"))
+ (match_operand:VF 4 "register_operand" " vr, vr, 0, 0, vr"))
+ (match_operand:VF 5 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_VECTOR"
+ "@
+ vf<madd_nmsub>.vv\t%0,%3,%4%p1
+ vf<madd_nmsub>.vv\t%0,%3,%4%p1
+ vf<macc_nmsac>.vv\t%0,%2,%3%p1
+ vf<macc_nmsac>.vv\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vv\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_<madd_nmsub><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " 0, 0, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))
+ (match_operand:VF 4 "register_operand" " vr, vr, vr"))
+ (match_dup 2)))]
+ "TARGET_VECTOR"
+ "@
+ vf<madd_nmsub>.vv\t%0,%3,%4%p1
+ vf<madd_nmsub>.vv\t%0,%3,%4%p1
+ vmv.v.v\t%0,%2\;vf<madd_nmsub>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "4")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn "*pred_<macc_nmsac><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " vr, vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))
+ (match_operand:VF 4 "register_operand" " 0, 0, vr"))
+ (match_dup 4)))]
+ "TARGET_VECTOR"
+ "@
+ vf<macc_nmsac>.vv\t%0,%2,%3%p1
+ vf<macc_nmsac>.vv\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vv\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "2")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn_and_rewrite "*pred_mul_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=&vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (match_operand 9 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:VF 4 "vector_arith_operand" " vr, vr"))
+ (match_operand:VF 5 "register_operand" " 0, vr")))]
+ "TARGET_VECTOR
+ && !rtx_equal_p (operands[2], operands[5])
+ && !rtx_equal_p (operands[3], operands[5])
+ && !rtx_equal_p (operands[4], operands[5])"
+ "@
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vv\t%0,%2,%3%p1
+ #"
+ "&& reload_completed
+ && !rtx_equal_p (operands[0], operands[5])"
+ {
+ emit_insn (gen_pred_merge<mode> (operands[0], RVV_VUNDEF (<MODE>mode),
+ operands[5], operands[4], operands[1], operands[6],
+ operands[7], operands[9]));
+ operands[5] = operands[4] = operands[0];
+ }
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_mul_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (match_operand 9 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand"))
+ (match_operand:VF 3 "register_operand"))
+ (match_operand:VF 4 "register_operand"))
+ (match_operand:VF 5 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+{})
+
+(define_insn "*pred_mul_<optab><mode>_undef_merge_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f, f, f"))
+ (match_operand:VF 3 "register_operand" " 0, 0, vr, vr, vr"))
+ (match_operand:VF 4 "register_operand" " vr, vr, 0, 0, vr"))
+ (match_operand:VF 5 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_VECTOR"
+ "@
+ vf<madd_nmsub>.vf\t%0,%2,%4%p1
+ vf<madd_nmsub>.vf\t%0,%2,%4%p1
+ vf<macc_nmsac>.vf\t%0,%2,%3%p1
+ vf<macc_nmsac>.vf\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vf\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_<madd_nmsub><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f"))
+ (match_operand:VF 3 "register_operand" " 0, 0, vr"))
+ (match_operand:VF 4 "register_operand" " vr, vr, vr"))
+ (match_dup 3)))]
+ "TARGET_VECTOR"
+ "@
+ vf<madd_nmsub>.vf\t%0,%2,%4%p1
+ vf<madd_nmsub>.vf\t%0,%2,%4%p1
+ vmv.v.v\t%0,%3\;vf<madd_nmsub>.vf\t%0,%2,%4%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "4")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn "*pred_<macc_nmsac><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f"))
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))
+ (match_operand:VF 4 "register_operand" " 0, 0, vr"))
+ (match_dup 4)))]
+ "TARGET_VECTOR"
+ "@
+ vf<macc_nmsac>.vf\t%0,%2,%3%p1
+ vf<macc_nmsac>.vf\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vf\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "2")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn_and_rewrite "*pred_mul_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=&vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (match_operand 9 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VF
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f"))
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:VF 4 "vector_arith_operand" " vr, vr"))
+ (match_operand:VF 5 "register_operand" " 0, vr")))]
+ "TARGET_VECTOR
+ && !rtx_equal_p (operands[3], operands[5])
+ && !rtx_equal_p (operands[4], operands[5])"
+ "@
+ vmv.v.v\t%0,%4\;vf<macc_nmsac>.vf\t%0,%2,%3%p1
+ #"
+ "&& reload_completed
+ && !rtx_equal_p (operands[0], operands[5])"
+ {
+ emit_insn (gen_pred_merge<mode> (operands[0], RVV_VUNDEF (<MODE>mode),
+ operands[5], operands[4], operands[1], operands[6],
+ operands[7], operands[9]));
+ operands[5] = operands[4] = operands[0];
+ }
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_neg_mul_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (match_operand 9 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand")
+ (mult:VF
+ (match_operand:VF 2 "register_operand")
+ (match_operand:VF 3 "register_operand"))))
+ (match_operand:VF 5 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+{
+ /* If the merge value is the second multiplication operand, swap the
+ (commutative) multiplication operands so the merge value becomes
+ operand 2 and the tied-operand alternatives can match.  */
+ if (rtx_equal_p (operands[3], operands[5]))
+ std::swap (operands[2], operands[3]);
+})
+
+(define_insn "pred_neg_mul_<optab><mode>_undef_merge"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " vr, vr, 0, 0, vr")
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " %0, 0, vr, vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr, vr, vr"))))
+ (match_operand:VF 5 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmadd_msub>.vv\t%0,%3,%4%p1
+ vf<nmadd_msub>.vv\t%0,%3,%4%p1
+ vf<nmacc_msac>.vv\t%0,%2,%3%p1
+ vf<nmacc_msac>.vv\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vv\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_<nmadd_msub><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " vr, vr, vr")
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " 0, 0, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))))
+ (match_dup 2)))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmadd_msub>.vv\t%0,%3,%4%p1
+ vf<nmadd_msub>.vv\t%0,%3,%4%p1
+ vmv.v.v\t%0,%2\;vf<nmadd_msub>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "4")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn "*pred_<nmacc_msac><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " 0, 0, vr")
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " vr, vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))))
+ (match_dup 4)))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmacc_msac>.vv\t%0,%2,%3%p1
+ vf<nmacc_msac>.vv\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vv\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "2")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn_and_rewrite "*pred_neg_mul_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=&vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (match_operand 9 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "vector_arith_operand" " vr, vr")
+ (mult:VF
+ (match_operand:VF 2 "register_operand" " vr, vr")
+ (match_operand:VF 3 "register_operand" " vr, vr"))))
+ (match_operand:VF 5 "register_operand" " 0, vr")))]
+ "TARGET_VECTOR
+ && !rtx_equal_p (operands[2], operands[5])
+ && !rtx_equal_p (operands[3], operands[5])
+ && !rtx_equal_p (operands[4], operands[5])"
+ "@
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vv\t%0,%2,%3%p1
+ #"
+ "&& reload_completed
+ && !rtx_equal_p (operands[0], operands[5])"
+ {
+ emit_insn (gen_pred_merge<mode> (operands[0], RVV_VUNDEF (<MODE>mode),
+ operands[5], operands[4], operands[1], operands[6],
+ operands[7], operands[9]));
+ operands[5] = operands[4] = operands[0];
+ }
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_neg_mul_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (match_operand 9 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand")
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand"))
+ (match_operand:VF 3 "register_operand"))))
+ (match_operand:VF 5 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+{})
+
+(define_insn "*pred_neg_mul_<optab><mode>_undef_merge_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " vr, vr, 0, 0, vr")
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f, f, f"))
+ (match_operand:VF 3 "register_operand" " 0, 0, vr, vr, vr"))))
+ (match_operand:VF 5 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmadd_msub>.vf\t%0,%2,%4%p1
+ vf<nmadd_msub>.vf\t%0,%2,%4%p1
+ vf<nmacc_msac>.vf\t%0,%2,%3%p1
+ vf<nmacc_msac>.vf\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vf\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_<nmadd_msub><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " vr, vr, vr")
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f"))
+ (match_operand:VF 3 "register_operand" " 0, 0, vr"))))
+ (match_dup 3)))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmadd_msub>.vf\t%0,%2,%4%p1
+ vf<nmadd_msub>.vf\t%0,%2,%4%p1
+ vmv.v.v\t%0,%3\;vf<nmadd_msub>.vf\t%0,%2,%4%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "4")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn "*pred_<nmacc_msac><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "register_operand" " 0, 0, vr")
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f, f"))
+ (match_operand:VF 3 "register_operand" " vr, vr, vr"))))
+ (match_dup 4)))]
+ "TARGET_VECTOR"
+ "@
+ vf<nmacc_msac>.vf\t%0,%2,%3%p1
+ vf<nmacc_msac>.vf\t%0,%2,%3%p1
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vf\t%0,%2,%3%p1"
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "2")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[6])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[7])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[8])"))])
+
+(define_insn_and_rewrite "*pred_neg_mul_<optab><mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=&vr, ?&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (match_operand 9 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VF
+ (plus_minus:VF
+ (match_operand:VF 4 "vector_arith_operand" " vr, vr")
+ (mult:VF
+ (vec_duplicate:VF
+ (match_operand:<VEL> 2 "register_operand" " f, f"))
+ (match_operand:VF 3 "register_operand" " vr, vr"))))
+ (match_operand:VF 5 "register_operand" " 0, vr")))]
+ "TARGET_VECTOR
+ && !rtx_equal_p (operands[3], operands[5])
+ && !rtx_equal_p (operands[4], operands[5])"
+ "@
+ vmv.v.v\t%0,%4\;vf<nmacc_msac>.vf\t%0,%2,%3%p1
+ #"
+ "&& reload_completed
+ && !rtx_equal_p (operands[0], operands[5])"
+ {
+ emit_insn (gen_pred_merge<mode> (operands[0], RVV_VUNDEF (<MODE>mode),
+ operands[5], operands[4], operands[1], operands[6],
+ operands[7], operands[9]));
+ operands[5] = operands[4] = operands[0];
+ }
+ [(set_attr "type" "vfmuladd")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point unary operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.8 Vector Floating-Point Square-Root Instruction
+;; - 13.9 Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+;; - 13.10 Vector Floating-Point Reciprocal Estimate Instruction
+;; - 13.12 Vector Floating-Point Sign-Injection Instructions (vfneg.v/vfabs.v)
+;; - 13.14 Vector Floating-Point Classify Instruction
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_<optab><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop:VF
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vf<insn>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type") (symbol_ref "INTVAL (operands[7])"))])
+
+(define_insn "@pred_<misc_op><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VF
+ [(match_operand:VF 3 "register_operand" " vr, vr")] VFMISC)
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vf<misc_op>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_class<mode>"
+ [(set (match_operand:<VCONVERT> 0 "register_operand" "=vd, vr")
+ (if_then_else:<VCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VCONVERT>
+ [(match_operand:VF 3 "register_operand" " vr, vr")] UNSPEC_VFCLASS)
+ (match_operand:<VCONVERT> 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfclass.v\t%0,%3%p1"
+ [(set_attr "type" "vfclass")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point widen binary operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.3 Vector Widening Floating-Point Add/Subtract Instructions
+;; - 13.5 Vector Widening Floating-Point Multiply
+;; -------------------------------------------------------------------------------
+
+;; Vector Widening Add/Subtract/Multiply.
+(define_insn "@pred_dual_widen_<optab><mode>"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_widen_binop:VWEXTF
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr"))
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr")))
+ (match_operand:VWEXTF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<insn>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vf<widen_binop_insn_type>")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_dual_widen_<optab><mode>_scalar"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_widen_binop:VWEXTF
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr"))
+ (float_extend:VWEXTF
+ (vec_duplicate:<V_DOUBLE_TRUNC>
+ (match_operand:<VSUBEL> 4 "register_operand" " f"))))
+ (match_operand:VWEXTF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<insn>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "vf<widen_binop_insn_type>")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_single_widen_<plus_minus:optab><mode>"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 3 "register_operand" " vr")
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr")))
+ (match_operand:VWEXTF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<insn>.wv\t%0,%3,%4%p1"
+ [(set_attr "type" "vf<widen_binop_insn_type>")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_single_widen_<plus_minus:optab><mode>_scalar"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 3 "register_operand" " vr")
+ (float_extend:VWEXTF
+ (vec_duplicate:<V_DOUBLE_TRUNC>
+ (match_operand:<VSUBEL> 4 "register_operand" " f"))))
+ (match_operand:VWEXTF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<insn>.wf\t%0,%3,%4%p1"
+ [(set_attr "type" "vf<widen_binop_insn_type>")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated widen floating-point ternary operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.7 Vector Widening Floating-Point Fused Multiply-Add Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_widen_mul_<optab><mode>"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (match_operand 9 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 2 "register_operand" " 0")
+ (mult:VWEXTF
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr"))
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr"))))
+ (match_operand:VWEXTF 5 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<macc_nmsac>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwmuladd")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_widen_mul_<optab><mode>_scalar"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (match_operand 9 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 2 "register_operand" " 0")
+ (mult:VWEXTF
+ (float_extend:VWEXTF
+ (vec_duplicate:<V_DOUBLE_TRUNC>
+ (match_operand:<VSUBEL> 3 "register_operand" " r")))
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr"))))
+ (match_operand:VWEXTF 5 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<macc_nmsac>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwmuladd")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_widen_neg_mul_<optab><mode>"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (match_operand 9 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VWEXTF
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 2 "register_operand" " 0")
+ (mult:VWEXTF
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr"))
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr")))))
+ (match_operand:VWEXTF 5 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<nmacc_msac>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwmuladd")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_widen_neg_mul_<optab><mode>_scalar"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (match_operand 9 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg:VWEXTF
+ (plus_minus:VWEXTF
+ (match_operand:VWEXTF 2 "register_operand" " 0")
+ (mult:VWEXTF
+ (float_extend:VWEXTF
+ (vec_duplicate:<V_DOUBLE_TRUNC>
+ (match_operand:<VSUBEL> 3 "register_operand" " r")))
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr")))))
+ (match_operand:VWEXTF 5 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfw<nmacc_msac>.vf\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwmuladd")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point comparison operations
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.13 Vector Floating-Point Compare Instructions
+;; -------------------------------------------------------------------------------
+
+(define_expand "@pred_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand")
+ (match_operand:VF 5 "register_operand")])
+ (match_operand:<VM> 2 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+ {})
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand" " vr")
+ (match_operand:VF 5 "register_operand" " vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_le (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand" " vr")
+ (match_operand:VF 5 "register_operand" " vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_gt (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand")
+ (vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand"))])
+ (match_operand:<VM> 2 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+ {})
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand" " vr")
+ (vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand" " r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_le (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:VF 4 "register_operand" " vr")
+ (vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand" " r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_gt (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 6 "vector_length_operand")
+ (match_operand 7 "const_int_operand")
+ (match_operand 8 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand"))
+ (match_operand:VF 4 "register_operand")])
+ (match_operand:<VM> 2 "vector_merge_operand")))]
+ "TARGET_VECTOR"
+ {})
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand" " r"))
+ (match_operand:VF 4 "register_operand" " vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_le (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 6 "vector_length_operand" " rK")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:VF
+ (match_operand:<VEL> 5 "register_operand" " r"))
+ (match_operand:VF 4 "register_operand" " vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR && known_gt (GET_MODE_SIZE (<MODE>mode), BYTES_PER_RISCV_VECTOR)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point merge
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.15 Vector Floating-Point Merge Instruction
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_merge<mode>_scalar"
+ [(set (match_operand:VF 0 "register_operand" "=vd")
+ (if_then_else:VF
+ (match_operand:<VM> 4 "register_operand" " vm")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_dup 4)
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:VF
+ (match_operand:<VEL> 3 "register_operand" " f"))
+ (match_operand:VF 2 "register_operand" " vr"))
+ (match_operand:VF 1 "vector_merge_operand" "0vu")))]
+ "TARGET_VECTOR"
+ "vfmerge.vfm\t%0,%2,%3,%4"
+ [(set_attr "type" "vfmerge")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point conversions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.17 Single-Width Floating-Point/Integer Type-Convert Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:<VCONVERT> 0 "register_operand" "=vd, vr")
+ (if_then_else:<VCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VCONVERT>
+ [(match_operand:VF 3 "register_operand" " vr, vr")] VFCVTS)
+ (match_operand:<VCONVERT> 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfcvt.x<v_su>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfcvtftoi")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<fix_cvt><mode>"
+ [(set (match_operand:<VCONVERT> 0 "register_operand" "=vd, vr")
+ (if_then_else:<VCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_fix:<VCONVERT>
+ (match_operand:VF 3 "register_operand" " vr, vr"))
+ (match_operand:<VCONVERT> 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfcvt.rtz.x<u>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfcvtftoi")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_<float_cvt><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=vd, vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:VF
+ (match_operand:<VCONVERT> 3 "register_operand" " vr, vr"))
+ (match_operand:VF 2 "vector_merge_operand" "0vu,0vu")))]
+ "TARGET_VECTOR"
+ "vfcvt.f.x<u>.v\t%0,%3%p1"
+ [(set_attr "type" "vfcvtitof")
+ (set_attr "mode" "<MODE>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point widen conversions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.18 Widening Floating-Point/Integer Type-Convert Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_widen_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:VWCONVERTI 0 "register_operand" "=&vr")
+ (if_then_else:VWCONVERTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VWCONVERTI
+ [(match_operand:<VNCONVERT> 3 "register_operand" " vr")] VFCVTS)
+ (match_operand:VWCONVERTI 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfwcvt.x<v_su>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfwcvtftoi")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_widen_<fix_cvt><mode>"
+ [(set (match_operand:VWCONVERTI 0 "register_operand" "=&vr")
+ (if_then_else:VWCONVERTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_fix:VWCONVERTI
+ (match_operand:<VNCONVERT> 3 "register_operand" " vr"))
+ (match_operand:VWCONVERTI 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfwcvt.rtz.x<u>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfwcvtftoi")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_widen_<float_cvt><mode>"
+ [(set (match_operand:VF 0 "register_operand" "=&vr")
+ (if_then_else:VF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:VF
+ (match_operand:<VNCONVERT> 3 "register_operand" " vr"))
+ (match_operand:VF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfwcvt.f.x<u>.v\t%0,%3%p1"
+ [(set_attr "type" "vfwcvtitof")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_extend<mode>"
+ [(set (match_operand:VWEXTF 0 "register_operand" "=&vr")
+ (if_then_else:VWEXTF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (float_extend:VWEXTF
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr"))
+ (match_operand:VWEXTF 2 "vector_merge_operand" " 0vu")))]
+ "TARGET_VECTOR"
+ "vfwcvt.f.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfwcvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; -------------------------------------------------------------------------------
+;; ---- Predicated floating-point narrow conversions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 13.19 Narrowing Floating-Point/Integer Type-Convert Instructions
+;; -------------------------------------------------------------------------------
+
+(define_insn "@pred_narrow_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VNCONVERT>
+ [(match_operand:VF 3 "register_operand" " 0, 0, vr")] VFCVTS)
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" "0vu,0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "vfncvt.x<v_su>.f.w\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftoi")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_narrow_<fix_cvt><mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_fix:<VNCONVERT>
+ (match_operand:VF 3 "register_operand" " 0, 0, vr"))
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " 0vu,0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "vfncvt.rtz.x<u>.f.w\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftoi")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_narrow_<float_cvt><mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:<VNCONVERT>
+ (match_operand:VWCONVERTI 3 "register_operand" " 0, 0, vr"))
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" "0vu,0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "vfncvt.f.x<u>.w\t%0,%3%p1"
+ [(set_attr "type" "vfncvtitof")
+ (set_attr "mode" "<VNCONVERT>")])
+
+(define_insn "@pred_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (float_truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTF 3 "register_operand" " 0, 0, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" "0vu,0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "vfncvt.f.f.w\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_rod_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=vd, vr, ?&vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(float_truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTF 3 "register_operand" " 0, 0, vr"))] UNSPEC_ROD)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" "0vu,0vu, 0vu")))]
+ "TARGET_VECTOR"
+ "vfncvt.rod.f.f.w\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-3.c
new file mode 100644
index 00000000000..5ff07da1146
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-3.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-4.c
new file mode 100644
index 00000000000..c280d97824f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-4.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-5.c
new file mode 100644
index 00000000000..1f71aa867c2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-5.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_m (m, v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-6.c
new file mode 100644
index 00000000000..2d2ed661434
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vv_constraint-6.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1 (v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_tu (v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetivli\tzero,4,e32,m1,ta,ma
+** vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_m (m, v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-8.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-8.c
new file mode 100644
index 00000000000..82e14734056
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-8.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+** vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+** vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** vsetivli\tzero,4,e32,m1,tu,mu
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-9.c b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-9.c
new file mode 100644
index 00000000000..1beed49d9ac
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/ternop_vx_constraint-9.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vle32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vse32\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+** vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** vsetivli\tzero,4,e32,m1,tu,ma
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+** vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** vsetivli\tzero,4,e32,m1,tu,mu
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {vmv} } } */
2023-02-22 13:44 [gcc r13-6276] RISC-V: Add floating-point RVV C/C++ api Kito Cheng