public inbox for gcc-patches@gcc.gnu.org
From: 钟居哲 <juzhe.zhong@rivai.ai>
To: cooper.joshua <cooper.joshua@linux.alibaba.com>,
	 gcc-patches <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>,
	palmer <palmer@dabbelt.com>, andrew <andrew@sifive.com>,
	"philipp.tomsich" <philipp.tomsich@vrull.eu>,
	"Jeff Law" <jeffreyalaw@gmail.com>,
	"Christoph Müllner" <christoph.muellner@vrull.eu>,
	jinma <jinma@linux.alibaba.com>,
	"Cooper Qu" <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Date: Wed, 20 Dec 2023 23:29:45 +0800
Message-ID: <F18256229484E170+2023122023294511508543@rivai.ai>
In-Reply-To: <1b73b51a-bf30-4e67-b861-88c171548e56.cooper.joshua@linux.alibaba.com>


Could you first send a separate patch that only adds the XTheadVector intrinsics that can leverage the current RVV intrinsics?

Then we can easily visit each of the following intrinsics that cannot leverage the current intrinsics.

I expect the next patch to add the stride load/store intrinsics, which cannot leverage the current intrinsics but share the same pattern shape as the current patterns.

The final patch adds the new intrinsics that the current RVV intrinsics do not have, like vlb, etc.

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 23:21
To: 钟居哲; gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

All the patterns that I "copied" from the current vector.md are necessary. The differences go beyond the "th" prefix: they are actually different patterns, since they generate totally different instructions apart from the "th_" string.

We have already tried our best to eliminate extra patterns in thead-vector.md. You can refer to the difference list in our spec and find out whether these patterns are redundant.

Joshua

------------------------------------------------------------------



From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Wednesday, December 20, 2023, 22:55
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

My first impression is that you are just copying the current vector.md with no pattern changes, simply adding a "th_" string to the pattern names.

It looks odd to me.

Take LLVM for example: even though the build time for the LLVM match table and tablegen is not an issue for now, they still try hard to minimize the match table and optimize the tablegen.

To me this patch just doubles the patterns, and potentially explodes the number of RISC-V patterns.

I think we should optimize the thead vector patterns and eliminate the redundant, unnecessary patterns to avoid affecting the build of the GCC toolchain.

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 22:41
To: 钟居哲; gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

Yes, XTheadVector does not have vfneg.v as a pseudo-instruction for vfsgnjn.vv.

We have listed all the differences between Vector and XTheadVector in our spec. You may refer to it.

https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc

https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd

Joshua

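For reference, the alias being discussed, written out as assembly (a sketch based on the RVV 1.0 spec's pseudoinstruction table; the register choices here are arbitrary):

```asm
# RVV 1.0 defines vfneg.v as a pseudoinstruction:
#   vfneg.v v2, v1   ==   vfsgnjn.vv v2, v1, v1
# XTheadVector has no such alias, so the backend must emit the
# sign-injection form directly:
vfsgnjn.vv v2, v1, v1
```
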
------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Wednesday, December 20, 2023, 22:27
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Why do you add this?

+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])

Doesn't XTheadVector have th.vfneg.v?

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 22:24
To: 钟居哲; gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

The patterns you supposed to be redundant are all necessary, because they generate different instructions from Vector.

Take pred_th_unit_strided_store as an example: XTheadVector does not have <sew> in its load/store instructions, so we cannot reuse the same pattern as Vector. That is why we define a new function_base in thead-vector-builtins-functions.def.

Joshua

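To make the missing <sew> suffix concrete, here is a hedged sketch of the two mnemonic schemes (per the RVV 1.0 spec and the XTheadVector/RVV 0.7.1 spec linked earlier; not taken from the patch itself):

```asm
# RVV 1.0 encodes the element width (EEW) in the load/store mnemonic:
vle32.v  v8, (a0)    # unit-stride load of 32-bit elements
vse32.v  v8, (a0)    # unit-stride store of 32-bit elements
# XTheadVector has no <sew> suffix; the width comes from the mnemonic
# family (vlb/vlh/vlw/vle) and the vtype setting instead:
th.vlw.v v8, (a0)    # unit-stride load of 32-bit (word) elements
th.vsw.v v8, (a0)    # unit-stride store of 32-bit (word) elements
```
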
------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Wednesday, December 20, 2023, 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 3 "vector_length_operand"    "   rK")
+       (match_operand 4 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 2 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])

These patterns are redundant; just the names are different.

They should be removed.

juzhe.zhong@rivai.ai

From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch is to handle the differences in instruction generation
between Vector and XTheadVector. Adding the th. prefix
to all XTheadVector instructions is not included.

For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md in order
not to generate instructions that XTheadVector does not support,
like vmv1r and vsext.vf2.

gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for XTheadVector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>

---
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc





















index f0676c830e8..4478395ab77 100644





















--- a/gcc/config.gcc





















+++ b/gcc/config.gcc





















@@ -547,9 +547,9 @@ riscv*)





















extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"





















extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"





















extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"





















- extra_objs="${extra_objs} thead.o riscv-target-attr.o"





















+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"





















d_target_objs="riscv-d.o"





















- extra_headers="riscv_vector.h"





















+ extra_headers="riscv_vector.h riscv_th_vector.h"





















target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"





















target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"





















;;





















diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md





















index 8b8a92f10a1..1fac56c7095 100644





















--- a/gcc/config/riscv/autovec.md





















+++ b/gcc/config/riscv/autovec.md





















@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"





















  [(match_operand      0 "register_operand")





















    (match_operand      1 "memory_operand")





















    (match_operand:ANYI 2 "const_int_operand")]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  {





















    riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],





















  operands[2]);





















diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md





















index 1a3a4f1ecbb..d910367e59c 100644





















--- a/gcc/config/riscv/predicates.md





















+++ b/gcc/config/riscv/predicates.md





















@@ -64,8 +64,9 @@ (define_predicate "csr_operand"





















        (match_operand 0 "register_operand")))





















(define_predicate "vector_csr_operand"





















-  (ior (match_operand 0 "const_csr_operand")





















-       (match_operand 0 "register_operand")))





















+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")





















+      (match_operand 0 "const_csr_operand"))





















+    (match_operand 0 "register_operand")))





















;; V has 32-bit unsigned immediates.  This happens to be the same constraint as





















;; the csr_operand, but it's not CSR related.





















@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"





















;; Predicates for the V extension.





















(define_special_predicate "vector_length_operand"





















  (ior (match_operand 0 "pmode_register_operand")





















-       (match_operand 0 "const_csr_operand")))





















+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")





















+    (match_operand 0 "const_csr_operand"))))





















(define_special_predicate "autovec_length_operand"





















  (ior (match_operand 0 "pmode_register_operand")





















diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc





















index 11c1f74d0b3..ec8f3486fd8 100644





















--- a/gcc/config/riscv/riscv-string.cc





















+++ b/gcc/config/riscv/riscv-string.cc





















@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)





















bnez a2, loop                   # Any more?





















ret                             # Return





















  */





















+   if (TARGET_XTHEADVECTOR)





















+    return false;





















+





















  gcc_assert (TARGET_VECTOR);





















  HOST_WIDE_INT potential_ew





















diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc





















index 486f5deb296..710332e17db 100644





















--- a/gcc/config/riscv/riscv-v.cc





















+++ b/gcc/config/riscv/riscv-v.cc





















@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)





















      return true;





















    }





















+  if (TARGET_XTHEADVECTOR)





















+      {





















+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,





















+   RVV_VLMAX, GEN_INT(VLMAX)));





















+ return true;





















+      }





















+





















  if (riscv_v_ext_vls_mode_p (mode))





















    {





















      if (GET_MODE_NUNITS (mode).to_constant () <= 31)





















@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()





















      compiler pick up either agnostic or undisturbed. Maybe we





















      will have a compile option like -mprefer=agnostic to set





















      this value???.  */





















-  return TAIL_ANY;





















+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;





















}





















/* Get prefer mask policy.  */





















@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()





















      compiler pick up either agnostic or undisturbed. Maybe we





















      will have a compile option like -mprefer=agnostic to set





















      this value???.  */





















-  return MASK_ANY;





















+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;





















}





















/* Get avl_type rtx.  */





















@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)





















bool





















vls_mode_valid_p (machine_mode vls_mode)





















{





















-  if (!TARGET_VECTOR)





















+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)





















    return false;





















  if (riscv_autovec_preference == RVV_SCALABLE)





















diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















index 4a754e0228f..6b49404a1fa 100644





















--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















@@ -33,6 +33,25 @@





















namespace riscv_vector {





















+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are





















+   valid for the function.  */





















+





















+static bool





















+check_type (tree return_type, vec<tree> &argument_types)





















+{





















+  tree arg;





















+  unsigned i;





















+





















+  if (!return_type)





















+    return false;





















+





















+  FOR_EACH_VEC_ELT (argument_types, i, arg)





















+    if (!arg)





















+      return false;





















+





















+  return true;





















+}





















+





















/* Add one function instance for GROUP, using operand suffix at index OI,





















    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */





















static void





















@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,





















    group.ops_infos.types[vec_type_idx].index);





















  b.allocate_argument_types (function_instance, argument_types);





















  b.apply_predication (function_instance, return_type, argument_types);





















+





















+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))





















+    return;





















+





















  b.add_overloaded_function (function_instance, *group.shape);





















  b.add_unique_function (function_instance, (*group.shape), return_type,





















argument_types);





















diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc





















index 4e2c66c2de7..f5f9000d89c 100644





















--- a/gcc/config/riscv/riscv-vector-builtins.cc





















+++ b/gcc/config/riscv/riscv-vector-builtins.cc





















@@ -51,6 +51,7 @@





















#include "riscv-vector-builtins.h"





















#include "riscv-vector-builtins-shapes.h"





















#include "riscv-vector-builtins-bases.h"





















+#include "thead-vector-builtins.h"





















using namespace riscv_vector;





















@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {





















#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \





















  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















#include "riscv-vector-builtins-functions.def"





















+#undef DEF_RVV_FUNCTION





















+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \





















+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \





















+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















+#include "thead-vector-builtins-functions.def"





















};





















/* The RVV types, with their built-in





















diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h





















index 4f38c09d73d..bb463510dd2 100644





















--- a/gcc/config/riscv/riscv-vector-builtins.h





















+++ b/gcc/config/riscv/riscv-vector-builtins.h





















@@ -123,6 +123,7 @@ enum required_ext





















  ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */





















  ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */





















  ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */





















+  XTHEADVECTOR_EXT,   /* XTheadVector extension */





















};





















/* Enumerates the RVV operand types.  */





















@@ -233,7 +234,7 @@ struct function_group_info





















    switch (ext_value)





















    {





















      case VECTOR_EXT:





















-        return TARGET_VECTOR;





















+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);





















      case ZVBB_EXT:





















        return TARGET_ZVBB;





















      case ZVBB_OR_ZVKB_EXT:





















@@ -252,6 +253,8 @@ struct function_group_info





















        return TARGET_ZVKSED;





















      case ZVKSH_EXT:





















        return TARGET_ZVKSH;





















+      case XTHEADVECTOR_EXT:





















+ return TARGET_XTHEADVECTOR;





















      default:





















        gcc_unreachable ();





















    }





















diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def





















index 5c9f9bcbc3e..f7a66b34bae 100644





















--- a/gcc/config/riscv/riscv-vector-switch.def





















+++ b/gcc/config/riscv/riscv-vector-switch.def





















@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.





















#endif





















/* Disable modes if TARGET_MIN_VLEN == 32.  */





















-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)





















-ENTRY (RVVMF32BI, true, LMUL_F4, 32)





















-ENTRY (RVVMF16BI, true, LMUL_F2, 16)





















+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)





















+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)





















+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)





















ENTRY (RVVMF8BI, true, LMUL_1, 8)





















ENTRY (RVVMF4BI, true, LMUL_2, 4)





















ENTRY (RVVMF2BI, true, LMUL_4, 2)





















@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
 
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use.  */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions.  */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores.  */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)












+





















+// TODO: 7.8. Vector Load/Store Segment Instructions





















+





















+/* 11. Vector Integer Arithmetic Instructions.  */





















+





















+// 11.1. Vector Single-Width Integer Add and Subtract





















+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)





















+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)





















+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)





















+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)





















+





















+// 11.2. Vector Widening Integer Add/Subtract





















+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)





















+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)





















+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)





















+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)





















+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)





















+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)





















+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)





















+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)





















+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)





















+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)





















+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)





















+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)





















+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)





















+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)





















+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)





















+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)





















+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)





















+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)





















+





















+// 11.3. Vector Integer Extension





















+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)





















+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)





















+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)





















+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)





















+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)





















+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)





















+





















+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions





















+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)





















+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)





















+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)





















+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)





















+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)





















+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)





















+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)





















+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)





















+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)





















+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)





















+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)





















+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)





















+





















+// 11.5. Vector Bitwise Logical Instructions





















+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)





















+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)





















+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)





















+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)





















+





















+// 11.6. Vector Single-Width Shift Instructions





















+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)





















+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)





















+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)





















+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)





















+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)





















+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)





















+





















+// 11.7. Vector Narrowing Integer Right Shift Instructions





















+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)





















+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)





















+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)





















+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)





















+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)





















+





















+// 11.8. Vector Integer Compare Instructions





















+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)





















+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)





















+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)





















+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)





















+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)





















+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)





















+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)





















+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)





















+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)





















+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)





















+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)





















+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)





















+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)





















+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)





















+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)





















+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)





















+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)





















+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)





















+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)





















+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)





















+





















+// 11.9. Vector Integer Min/Max Instructions





















+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)





















+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)





















+





















+// 11.10. Vector Single-Width Integer Multiply Instructions





















+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)





















+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)





















+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)





















+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)





















+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)





















+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)





















+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)





















+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)





















+





















+// 11.11. Vector Integer Divide Instructions





















+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)





















+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)





















+





















+// 11.12. Vector Widening Integer Multiply Instructions





















+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)





















+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)





















+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)





















+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)





















+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)





















+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)





















+





















+// 11.13. Vector Single-Width Integer Multiply-Add Instructions





















+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)





















+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)





















+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)





















+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)





















+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)





















+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)





















+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)





















+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)





















+





















+// 11.14. Vector Widening Integer Multiply-Add Instructions





















+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)





















+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)





















+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)





















+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)





















+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)





















+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)





















+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)





















+





















+// 11.15. Vector Integer Merge Instructions





















+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)





















+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)





















+





















+// 11.16 Vector Integer Move Instructions





















+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)





















+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)





















+





















+/* 12. Vector Fixed-Point Arithmetic Instructions. */





















+





















+// 12.1. Vector Single-Width Saturating Add and Subtract





















+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)





















+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)





















+





















+// 12.2. Vector Single-Width Averaging Add and Subtract





















+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)





















+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)





















+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)





















+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)





















+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)





















+





















+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation





















+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)





















+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)





















+





















+// 12.4. Vector Single-Width Scaling Shift Instructions





















+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)





















+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)





















+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)





















+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)





















+





















+// 12.5. Vector Narrowing Fixed-Point Clip Instructions





















+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)





















+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)





















+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)





















+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)





















+





















+/* 13. Vector Floating-Point Instructions.  */





















+





















+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions





















+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)





















+





















+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions





















+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)





















+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)





















+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)





















+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)





















+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)





















+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)





















+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)





















+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)





















+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)





















+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)





















+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)





















+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)





















+





















+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions





















+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)





















+





















+// 13.5. Vector Widening Floating-Point Multiply





















+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)





















+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)





















+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)





















+





















+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions





















+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)





















+





















+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)





















+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)





















+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)





















+





















+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions





















+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)





















+





















+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)





















+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)





















+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)





















+





















+// 13.8. Vector Floating-Point Square-Root Instruction





















+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)





















+





















+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)





















+





















+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction





















+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)





















+





















+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction





















+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)





















+





















+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)





















+





















+// 13.11. Vector Floating-Point MIN/MAX Instructions





















+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)





















+





















+// 13.12. Vector Floating-Point Sign-Injection Instructions





















+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)





















+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)





















+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)





















+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)





















+





















+// 13.13. Vector Floating-Point Compare Instructions





















+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)





















+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)





















+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)





















+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)





















+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)





















+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)





















+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)





















+





















+// 13.14. Vector Floating-Point Classify Instruction





















+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)





















+





















+// 13.15. Vector Floating-Point Merge Instruction





















+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)





















+





















+// 13.16. Vector Floating-Point Move Instruction





















+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)





















+





















+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions





















+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)





















+





















+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)





















+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)





















+





















+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions





















+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)





















+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)





















+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)





















+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
+
+/* Implements
+ * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
+ * codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+	int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+					    e.index_mode ()));
+	else
+	  {
+	    unsigned src_eew_bitsize
+	      = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+	    unsigned dst_eew_bitsize
+	      = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+	    if (dst_eew_bitsize == src_eew_bitsize)
+	      {
+		return e.use_exact_insn (
+		  code_for_pred_th_indexed_load_same_eew (
+		    unspec, e.vector_mode ()));
+	      }
+	    else if (dst_eew_bitsize > src_eew_bitsize)
+	      {
+		unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+		switch (factor)
+		  {
+		  case 2:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x2_greater_eew (
+			unspec, e.vector_mode ()));
+		  case 4:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x4_greater_eew (
+			unspec, e.vector_mode ()));
+		  case 8:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x8_greater_eew (
+			unspec, e.vector_mode ()));
+		  default:
+		    gcc_unreachable ();
+		  }
+	      }
+	    else
+	      {
+		unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+		switch (factor)
+		  {
+		  case 2:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x2_smaller_eew (
+			unspec, e.vector_mode ()));
+		  case 4:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x4_smaller_eew (
+			unspec, e.vector_mode ()));
+		  case 8:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x8_smaller_eew (
+			unspec, e.vector_mode ()));
+		  default:
+		    gcc_unreachable ();
+		  }
+	      }
+	  }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_th_strided_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_th_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vmadc.  */





















+class th_vmadc : public function_base





















+{





















+public:





















+  bool apply_tail_policy_p () const override { return false; }





















+  bool apply_mask_policy_p () const override { return false; }





















+  bool use_mask_predication_p () const override { return false; }





















+  bool has_merge_operand_p () const override { return false; }





















+





















+  rtx expand (function_expander &e) const override





















+  {





















+    switch (e.op_info->op)





















+      {





















+      case OP_TYPE_vvm:





















+ return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));





















+      case OP_TYPE_vxm:





















+ return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));





















+      case OP_TYPE_vv:





















+ return e.use_exact_insn (





















+   code_for_pred_th_madc_overflow (e.vector_mode ()));





















+      case OP_TYPE_vx:





















+ return e.use_exact_insn (





















+   code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));





















+      default:





















+ gcc_unreachable ();





















+      }





















+  }





















+};





















+





















+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+	return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+	return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+	return e.use_exact_insn (
+	  code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+	return e.use_exact_insn (
+	  code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+





















+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+





















+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (
+	code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+	code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+	code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
+





















+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
+





















+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+





















+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+





















+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+





















+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+





















+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+





















+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (
+	UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+





















+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (
+	UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+





















+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+





















+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+





















+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+





















+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+





















+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@





















+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+





















+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+





















+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+





















+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@





















+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+  (UNSPEC_REDUC_SUM "redsum")
+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+





















+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+





















+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT(riscv_vector::VLMAX)));
+    DONE;
+  })
+





















+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	  UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+





















+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	  UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+





















+(define_expand "@pred_th_mov<mode>"





















+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")





















+    (if_then_else:V_VLS





















+      (unspec:<VM>





















+        [(match_operand:<VM> 1 "vector_mask_operand")





















+         (match_operand 4 "vector_length_operand")





















+         (match_operand 5 "const_int_operand")





















+         (match_operand 6 "const_int_operand")





















+         (match_operand 7 "const_int_operand")





















+         (reg:SI VL_REGNUM)





















+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+      (match_operand:V_VLS 3 "vector_move_operand")





















+      (match_operand:V_VLS 2 "vector_merge_operand")))]





















+  "TARGET_XTHEADVECTOR"





















+  {})





















+





















+(define_insn_and_split "*pred_broadcast<mode>"





















+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")





















+ (if_then_else:V_VLSI





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")





















+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (vec_duplicate:V_VLSI





















+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))





















+   (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+   vmv.v.x\t%0,%3





















+   vmv.v.x\t%0,%3





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero





















+   vlse.v\t%0,%3,zero





















+   vmv.s.x\t%0,%3





















+   vmv.s.x\t%0,%3"





















+  "(register_operand (operands[3], <VEL>mode)





















+  || CONST_POLY_INT_P (operands[3]))





















+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"





















+  [(set (match_dup 0)





















+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)





















+      (match_dup 5) (match_dup 6) (match_dup 7)





















+      (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (vec_duplicate:V_VLSI (match_dup 3))





















+   (match_dup 2)))]





















+  {





















+    gcc_assert (can_create_pseudo_p ());





















+    if (CONST_POLY_INT_P (operands[3]))





















+      {





















+ rtx tmp = gen_reg_rtx (<VEL>mode);





















+ emit_move_insn (tmp, operands[3]);





















+ operands[3] = tmp;





















+      }





















+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),





















+ GET_MODE_ALIGNMENT (<VEL>mode));





















+    m = validize_mem (m);





















+    emit_move_insn (m, operands[3]);





















+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));





















+    operands[3] = m;





















+





















+    /* For SEW = 64 in RV32 system, we expand vmv.s.x:





















+       andi a2,a2,1





















+       vsetvl zero,a2,e64





















+       vlse64.v  */





















+    if (satisfies_constraint_Wb1 (operands[1]))





















+      {





















+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);





















+ operands[1] = CONSTM1_RTX (<VM>mode);





















+      }





















+  }





















+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_broadcast<mode>"





















+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")





















+ (if_then_else:V_VLSF_ZVFHMIN





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")





















+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (vec_duplicate:V_VLSF_ZVFHMIN





















+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))





















+   (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+   vfmv.v.f\t%0,%3





















+   vfmv.v.f\t%0,%3





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero





















+   vlse.v\t%0,%3,zero





















+   vfmv.s.f\t%0,%3





















+   vfmv.s.f\t%0,%3"





















+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; vle.v/vse.v,vmv.v.v





















+(define_insn_and_split "*pred_th_mov<mode>"





















+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")





















+    (if_then_else:V_VLS





















+      (unspec:<VM>





















+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")





















+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")





















+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (reg:SI VL_REGNUM)





















+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")





















+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]





















+  "(TARGET_XTHEADVECTOR





















+    && (register_operand (operands[0], <MODE>mode)





















+        || register_operand (operands[3], <MODE>mode)))"





















+  "@





















+   vle.v\t%0,%3%p1





















+   vle.v\t%0,%3





















+   vle.v\t%0,%3,%1.t





















+   vse.v\t%3,%0%p1





















+   vmv.v.v\t%0,%3





















+   vmv.v.v\t%0,%3"





















+  "&& register_operand (operands[0], <MODE>mode)





















+   && register_operand (operands[3], <MODE>mode)





















+   && satisfies_constraint_vu (operands[2])





















+   && INTVAL (operands[7]) == riscv_vector::VLMAX"





















+  [(set (match_dup 0) (match_dup 3))]





















+  ""





















+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn_and_split "@pred_th_mov<mode>"





















+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")





















+ (if_then_else:VB_VLS





















+   (unspec:VB_VLS





















+     [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")





















+      (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")





















+      (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")





















+   (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+   #





















+   #





















+   vmcpy.m\t%0,%3





















+   vmclr.m\t%0





















+   vmset.m\t%0"





















+  "&& !reload_completed"





















+  [(const_int 0)]





















+  {





















+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))





















+        || (REG_P (operands[0]) && REG_P (operands[3])





















+     && INTVAL (operands[5]) == riscv_vector::VLMAX))





















+      {





















+ emit_move_insn (operands[0], operands[3]);





















+ DONE;





















+      }





















+





















+    FAIL;





















+  }





















+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_store<mode>"





















+  [(set (match_operand:V 0 "memory_operand"                 "+m")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")





















+      (match_operand 3 "vector_length_operand"    "   rK")





















+      (match_operand 4 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operand:V 2 "register_operand"         "    vr")





















+   (match_dup 0)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vse.v\t%2,%0%p1"





















+  [(set_attr "type" "vste")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "avl_type_idx") (const_int 4))





















+   (set_attr "vl_op_idx" "3")])





















+





















+(define_insn "@pred_th_strided_load<mode>"





















+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")





















+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")





















+      (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)





















+   (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+  vlse.v\t%0,%3,%z4%p1





















+  vlse.v\t%0,%3,%z4





















+  vlse.v\t%0,%3,%z4,%1.t





















+  vle.v\t%0,%3%p1





















+  vle.v\t%0,%3





















+  vle.v\t%0,%3,%1.t"





















+  [(set_attr "type" "vlds")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_strided_store<mode>"





















+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK,       rK")





















+      (match_operand 5 "const_int_operand"        "    i,        i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")





















+      (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)





















+   (match_dup 0)))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+  vsse.v\t%3,%0,%z2%p1





















+  vse.v\t%3,%0%p1"





















+  [(set_attr "type" "vsts")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"





















+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")





















+      (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")





















+      (match_operand 6 "const_int_operand"         "  i,  i, i,  i")





















+      (match_operand 7 "const_int_operand"         "  i,  i, i,  i")





















+      (match_operand 8 "const_int_operand"         "  i,  i, i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)





















+   (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; DEST eew is greater than SOURCE eew.





















+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"





















+  [(set (match_operand:VEEWEXT2 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT2





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT2





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"





















+  [(set (match_operand:VEEWEXT4 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT4





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT4





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"





















+  [(set (match_operand:VEEWEXT8 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT8





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT8





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; DEST eew is smaller than SOURCE eew.





















+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC2





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC2





















+     [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC4





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC4





















+     [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC8





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC8





















+     [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO64I 2 "register_operand" "  vr")





















+    (match_operand:RATIO64 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO64:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO32I 2 "register_operand" "  vr")





















+    (match_operand:RATIO32 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO32:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO16I 2 "register_operand" "  vr")





















+    (match_operand:RATIO16 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+    (match_operand:RATIO8I 2 "register_operand" "  vr")
+    (match_operand:RATIO8 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+    (match_operand:RATIO4I 2 "register_operand" "  vr")
+    (match_operand:RATIO4 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")
+    (match_operand:RATIO2I 2 "register_operand"  "  vr")
+    (match_operand:RATIO2 3 "register_operand"   "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")
+    (match_operand:RATIO1 2 "register_operand"   "  vr")
+    (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO1:MODE>")])
+
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"               "=r")
+ (popcount:P
+   (unspec:VB
+     [(and:VB
+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+        (match_operand:VB 2 "register_operand"    "   vr"))
+      (match_operand 3 "vector_length_operand"    "   rK")
+      (match_operand 4 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+  "TARGET_XTHEADVECTOR"
+  "vmpopc.m\t%0,%2%p1"
+  [(set_attr "type" "vmpop")
+   (set_attr "mode" "<VB:MODE>")])
+
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"                 "=r")
+ (plus:P
+   (ffs:P
+     (unspec:VB
+       [(and:VB
+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+          (match_operand:VB 2 "register_operand"    "   vr"))
+        (match_operand 3 "vector_length_operand"    "   rK")
+        (match_operand 4 "const_int_operand"        "    i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+   (const_int -1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmfirst.m\t%0,%2%p1"
+  [(set_attr "type" "vmffs")
+   (set_attr "mode" "<VB:MODE>")])
+
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<VNCONVERT>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<VNCONVERT>
+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)
+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftoi")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<VNCONVERT>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float:<VNCONVERT>
+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.x<u>.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtitof")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<optab><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (any_shiftrt:VWEXTI
+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (any_shiftrt:VWEXTI
+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnsrl.vx\t%0,%3,x0%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (float_truncate:<V_DOUBLE_TRUNC>
+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftof")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V
+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)
+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))
+   (set (reg:SI VL_REGNUM)
+   (unspec:SI
+     [(if_then_else:V
+        (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+        (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vleff.v\t%0,%3%p1"
+  [(set_attr "type" "vldff")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>e.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegde")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 3 "vector_length_operand"    "   rK")
+       (match_operand 4 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 2 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (match_operand 8 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_STRIDED)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+  [(set_attr "type" "vlsegds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 4 "vector_length_operand"    "   rK")
+       (match_operand 5 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 3 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+  [(set_attr "type" "vssegts")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_VLEFF)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))
+   (set (reg:SI VL_REGNUM)
+        (unspec:SI
+          [(if_then_else:VT
+      (unspec:<VM>
+        [(match_dup 1) (match_dup 4) (match_dup 5)
+         (match_dup 6) (match_dup 7)
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (unspec:VT
+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+      (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>eff.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegdff")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V1T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V1T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V2T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V2T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V4T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V4T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V8T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V8T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")
+ (if_then_else:V16T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V16T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)
+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")
+ (if_then_else:V32T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V32T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)
+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V32T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO64I 2 "register_operand"       "   vr")
+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)


















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO32I 2 "register_operand"       "   vr")





















+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V2T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO16I 2 "register_operand"       "   vr")





















+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V4T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO8I 2 "register_operand"       "   vr")





















+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V8T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO4I 2 "register_operand"      "   vr")





















+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V16T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO2I 2 "register_operand"      "   vr")





















+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V32T:MODE>")])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop_neg:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfsgnjn.vv\t%0,%3,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop_abs:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfsgnjx.vv\t%0,%3,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")





















+ (if_then_else:V_VLSI





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")





















+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")





















+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (not_unop:V_VLSI





















+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))





















+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnot.v\t%0,%3%p1"





















+  [(set_attr "type" "vialu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")





















+ (if_then_else:V_VLSI





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")





















+      (match_operand 5 "const_int_operand" " i, i,  i,  i")





















+      (match_operand 6 "const_int_operand" " i, i,  i,  i")





















+      (match_operand 7 "const_int_operand" " i, i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (neg_unop:V_VLSI





















+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))





















+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vrsub.vx\t%0,%3,x0%p1"





















+  [(set_attr "type" "vialu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<insn>.v\t%0,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_narrow_clip<v_su><mode>"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:<V_DOUBLE_TRUNC>





















+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")





















+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"





















+  [(set_attr "type" "vnclip")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:<V_DOUBLE_TRUNC>





















+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")





















+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"





















+  [(set_attr "type" "vnclip")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+;; Float Reduction Sum (vfred[ou]sum.vs)





















+(define_insn "@pred_th_<th_reduc_op><mode>"





















+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")





















+ (unspec:<V_LMUL1>





















+   [(unspec:<VM>





















+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")





















+      (match_operand               5 "vector_length_operand" "   rK,   rK")





















+      (match_operand               6 "const_int_operand"     "    i,    i")





















+      (match_operand               7 "const_int_operand"     "    i,    i")





















+      (match_operand               8 "const_int_operand"     "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+           (unspec:<V_LMUL1> [





















+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")





















+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")





















+           ] ANY_FREDUC_SUM)





















+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"





















+  [(set_attr "type" "vfred<order>")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)





















+(define_insn "@pred_th_<th_reduc_op><mode>"





















+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")





















+ (unspec:<V_EXT_LMUL1>





















+   [(unspec:<VM>





















+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")





















+      (match_operand                5 "vector_length_operand" "   rK,   rK")





















+      (match_operand                6 "const_int_operand"     "    i,    i")





















+      (match_operand                7 "const_int_operand"     "    i,    i")





















+      (match_operand                8 "const_int_operand"     "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+           (unspec:<V_EXT_LMUL1> [





















+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")





















+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")





















+           ] ANY_FWREDUC_SUM)





















+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"





















+  [(set_attr "type" "vfwred<order>")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_madc<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")





















+ (unspec:<VM>





















+    [(plus:VI





















+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")





















+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))





















+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")





















+        (match_operand 5 "const_int_operand"     "   i,   i,   i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.v%o2m\t%0,%1,%v2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_msbc<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI





















+      (match_operand:VI 1 "register_operand"     "  vr")





















+      (match_operand:VI 2 "register_operand"     " vr"))





















+     (match_operand:<VM> 3 "register_operand"    " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand" " rK")





















+        (match_operand 5 "const_int_operand"     "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vvm\t%0,%1,%2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "register_operand" "  r"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_msbc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_expand "@pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_int_operand"))





















+      (match_operand:VI_D 1 "register_operand"))





















+     (match_operand:<VM> 3 "register_operand")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand")





















+        (match_operand 5 "const_int_operand")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+{





















+  if (riscv_vector::sew64_scalar_helper (





















+ operands,





















+ /* scalar op */&operands[2],





















+ /* vl */operands[4],





















+ <MODE>mode,





















+ riscv_vector::simm5_p (operands[2]),





















+ [] (rtx *operands, rtx boardcast_scalar) {





















+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],





















+        boardcast_scalar, operands[3], operands[4], operands[5]));





















+        },





















+ (riscv_vector::avl_type) INTVAL (operands[5])))





















+    DONE;





















+})





















+





















+(define_insn "*pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_D 1 "register_operand"    "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "*pred_th_madc<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (sign_extend:<VEL>





















+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))





















+      (match_operand:VI_D 1 "register_operand"         "  vr"))





















+     (match_operand:<VM> 3 "register_operand"          " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"       " rK")





















+        (match_operand 5 "const_int_operand"           "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_expand "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (match_operand:<VM> 3 "register_operand")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand")
+        (match_operand 5 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (match_operand:<VM> 3 "register_operand"          " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"       " rK")
+        (match_operand 5 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+ (unspec:<VM>
+    [(plus:VI
+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")
+        (match_operand 4 "const_int_operand"     "   i,   i,   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2\t%0,%1,%v2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI
+      (match_operand:VI 1 "register_operand"     "   vr")
+      (match_operand:VI 2 "register_operand"     "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK")
+        (match_operand 4 "const_int_operand"     "   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vv\t%0,%1,%2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"       " rK")
+        (match_operand 4 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"      " rK")
+        (match_operand 4 "const_int_operand"          "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)
+     (match_dup 4)
+     (match_dup 5)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_operand 3 "const_int_operand" "i")
+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+       (match_dup 5)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 4 "register_operand"      "  r"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")





















+       (vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))





















+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))





















+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_eqne<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))





















+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))





















+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))





















+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_eqne<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))





















+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"          "  0")





















+      (match_operand 5 "vector_length_operand"          " rK")





















+      (match_operand 6 "const_int_operand"              "  i")





















+      (match_operand 7 "const_int_operand"              "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"            "  0")





















+      (match_operand 5 "vector_length_operand"            " rK")





















+      (match_operand 6 "const_int_operand"                "  i")





















+      (match_operand 7 "const_int_operand"                "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))





















+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))





















+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))





















+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")





















+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vmf%B3.vv\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"          "  0")





















+      (match_operand 5 "vector_length_operand"          " rK")





















+      (match_operand 6 "const_int_operand"              "  i")





















+      (match_operand 7 "const_int_operand"              "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "signed_order_operator"





















+      [(match_operand:V_VLSF 3 "register_operand"           " vr")





















+       (match_operand:V_VLSF 4 "register_operand"           " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmf%B2.vv\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")





















+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vmf%B3.vv\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"         "  0")
+      (match_operand 5 "vector_length_operand"         " rK")
+      (match_operand 6 "const_int_operand"             "  i")
+      (match_operand 7 "const_int_operand"             "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "signed_order_operator"
+      [(match_operand:V_VLSF 3 "register_operand"      " vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 4 "register_operand"     "  f"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"         "  0")
+      (match_operand 5 "vector_length_operand"         " rK")
+      (match_operand 6 "const_int_operand"             "  i")
+      (match_operand 7 "const_int_operand"             "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 4 "register_operand"     "  f"))
+       (match_operand:V_VLSF 3 "register_operand"      " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
 ])
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
 ])
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
 ])
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
 ])
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
 ])
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
 ])
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
 ])
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
 ])
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
 ])
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
 ])
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [





















  (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [





















(define_mode_iterator VWCONVERTI [





















  (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")





















-  (RVVMF2SI "TARGET_ZVFH")





















+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")





















@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [





















])





















(define_mode_iterator VQEXTI [





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")





















  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")





















@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [





















;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].





















(define_mode_iterator VINDEXED [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [





















  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")





















-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")





















  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")





















@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [





















(define_mode_iterator V_VLS_F_CONVERT_SI [





















  (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")





















-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [





















])





















(define_mode_iterator V_VLS_F_CONVERT_DI [





















-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md





















index 036b2425f32..9941651341d 100644





















--- a/gcc/config/riscv/vector.md





















+++ b/gcc/config/riscv/vector.md





















@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"





















;; check. However, we need default value of SEW for vsetvl instruction since there





















;; is no field for ratio in the vsetvl instruction encoding.





















(define_attr "sew" ""





















-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\





















+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\





















  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\





















  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\





















  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\





















@@ -95,6 +95,18 @@ (define_attr "sew" ""





















  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\





















  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")





















(const_int 8)





















+ (eq_attr "mode" "RVVMF16BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 16)





















+      (const_int 8))





















+ (eq_attr "mode" "RVVMF32BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 32)





















+      (const_int 8))





















+ (eq_attr "mode" "RVVMF64BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 64)





















+      (const_int 8))





















(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\





















  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\





















  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\





















@@ -155,9 +167,9 @@ (define_attr "vlmul" ""





















(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")





















(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")





















(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")





















- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")





















- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")





















- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")





















+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")





















+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")





















+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")





















(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")





















(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")





















(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")





















@@ -428,6 +440,10 @@ (define_attr "ratio" ""





















  vislide1up,vislide1down,vfslide1up,vfslide1down,\





















  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")





















  (const_int INVALID_ATTRIBUTE)





















+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\





















+        vlsegdff,vssegtux,vlsegdox,vlsegdux")





















+       (match_test "TARGET_XTHEADVECTOR"))





















+    (const_int INVALID_ATTRIBUTE)





















(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)





















(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)





















(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)





















@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""





















(symbol_ref "riscv_vector::FRM_DYN")]





















(symbol_ref "riscv_vector::FRM_NONE")))





















+(include "thead-vector.md")





















+





















;; -----------------------------------------------------------------





















;; ---- Miscellaneous Operations





















;; -----------------------------------------------------------------





















@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"





















(define_insn "*mov<mode>_whole"





















  [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")





















(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "@





















    vl%m1re<sew>.v\t%0,%1





















    vs%m1r.v\t%1,%0





















@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"





















(define_insn "*mov<mode>"





















  [(set (match_operand:VB 0 "register_operand" "=vr")





















(match_operand:VB 1 "register_operand" " vr"))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "vmv1r.v\t%0,%1"





















  [(set_attr "type" "vmov")





















    (set_attr "mode" "<MODE>")])





















@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"





















  (any_extend:VWEXTI





















    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))





















  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf2\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"





















  (any_extend:VQEXTI





















    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))





















  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf4\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"





















  (any_extend:VOEXTI





















    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))





















  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf8\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















index 2e0e12aa045..2eef9e1e1a8 100644





















--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















@@ -1,4 +1,4 @@





















-/* { dg-do compile } */





















+/* { dg-do compile { target { ! riscv_xtheadvector } } } */





















/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */





















void foo0 () {__rvv_bool64_t t;}





















diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















index 3d81b179235..ef329e30785 100644





















--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















@@ -1,4 +1,4 @@





















/* { dg-do compile } */





















/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */





















-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */





















+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */





















diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp





















index 7f13ff0ca56..70df6b1401c 100644





















--- a/gcc/testsuite/lib/target-supports.exp





















+++ b/gcc/testsuite/lib/target-supports.exp





















@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {





















    }]





















}





















+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.





















+# Cache the result.





















+





















+proc check_effective_target_riscv_xtheadvector { } {





















+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {





















+       #ifndef __riscv_xtheadvector





















+       #error "Not __riscv_xtheadvector"





















+       #endif





















+    }]





















+}





















+





















+





















# Return 1 if we can execute code when using dg-add-options riscv_v





















proc check_effective_target_riscv_v_ok { } {





















--





















2.17.1

Thread overview: 69+ messages
2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
2023-11-18 10:06   ` Kito Cheng
2023-11-18  4:28 ` [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
2023-11-18 10:13   ` Kito Cheng
2023-11-18  4:29 ` [PATCH v2 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
2023-11-18  4:32 ` [PATCH v2 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
2023-11-18  4:34 ` [PATCH v2 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
2023-11-18  4:35 ` [PATCH v2 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
2023-11-18  4:37 ` [PATCH v2 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
2023-11-18  4:39 ` [PATCH v2 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
2023-12-20 18:14     ` Jeff Law
2023-12-27  2:46       ` 回复:[PATCH " joshua
2023-12-29  1:44       ` joshua
2023-12-20 12:27   ` [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns Jun Sha (Joshua)
2023-12-20 18:16     ` Jeff Law
2023-12-27  2:49       ` 回复:[PATCH " joshua
2023-12-28 15:50         ` Jeff Law
2023-12-20 12:30   ` [PATCH v3 3/6] RISC-V: Introduce XTheadVector as a subset of V1.0.0 Jun Sha (Joshua)
2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
2023-12-20 18:22     ` Jeff Law
2023-12-20 22:48       ` 钟居哲
2023-12-21  4:41         ` Jeff Law
2023-12-21  9:43           ` Kito Cheng
2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-25  6:37       ` juzhe.zhong
2023-12-25  7:08         ` 回复:[PATCH " joshua
2023-12-25  7:09           ` juzhe.zhong
2023-12-25  8:14       ` [PATCH " Jun Sha (Joshua)
2023-12-25  8:18         ` juzhe.zhong
2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
2023-12-20 14:00     ` 钟居哲
2023-12-20 14:24       ` 回复:[PATCH " joshua
2023-12-20 14:27         ` 钟居哲
2023-12-20 14:41           ` 回复:回复:[PATCH " joshua
2023-12-20 14:48             ` 回复:[PATCH " 钟居哲
2023-12-20 14:55             ` 钟居哲
2023-12-20 15:21               ` 回复:回复:[PATCH " joshua
2023-12-20 15:29                 ` 钟居哲 [this message]
2023-12-25  6:29     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-29  1:46       ` Jun Sha (Joshua)
2023-12-29  1:58         ` juzhe.zhong
2023-12-29  2:09           ` 回复:[PATCH " joshua
2023-12-29  2:11             ` Re:[PATCH " joshua
2023-12-29  2:14             ` 回复:[PATCH " juzhe.zhong
2023-12-29  2:17               ` Re:[PATCH " joshua
2023-12-29  2:22                 ` juzhe.zhong
2023-12-29  2:25                   ` Re:Re:[PATCH " joshua
2023-12-29  2:25                     ` Re:[PATCH " juzhe.zhong
2023-12-29  2:30                       ` joshua
2023-12-29  2:31                         ` juzhe.zhong
2023-12-29  2:47                         ` juzhe.zhong
2023-12-20 12:36   ` [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics Jun Sha (Joshua)
2023-12-25  6:31     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-29  1:49       ` Jun Sha (Joshua)
2023-12-20 23:04   ` [PATCH v3 0/6] RISC-V: Support XTheadVector extension 钟居哲
2023-12-22  3:33     ` 回复:[PATCH " joshua
2023-12-22  8:07       ` juzhe.zhong
2023-12-22 10:29         ` 回复:回复:[PATCH " joshua
2023-12-22 10:31           ` 回复:[PATCH " juzhe.zhong
2023-12-23  3:37             ` 回复:回复:[PATCH " joshua
2023-12-23 22:52               ` 回复:[PATCH " 钟居哲
2023-12-22 17:21         ` Jeff Law
2023-12-20 23:08   ` [PATCH " 钟居哲
2023-12-21  3:28     ` Jeff Law
2023-12-21  3:30       ` juzhe.zhong
2023-12-21  4:04         ` Jeff Law
